Decoding Consumer Sentiments: Advanced NLP Techniques for Analyzing Smartphone Reviews

ABSTRACT

Objectives:  this study aims to bridge the gap in effectively analyzing online consumer feedback on smartphones, which is often voluminous and linguistically complex. The ultimate goal is to provide smartphone manufacturers with actionable insights to refine product features and marketing strategies. We propose a dual-model framework using bidirectional encoder representations from transformers (BERT) and sentence transformers for sentiment analysis and topic modeling, respectively. This approach is intended to enhance the accuracy and depth of consumer sentiment analysis.

Method:  sentiment analysis and topic modeling are applied to a large dataset of smartphone reviews sourced from Kaggle and Amazon. The BERT model is used to understand the context and sentiment of words, while sentence transformers generate embeddings for clustering reviews into thematic topics.

Results:  our analysis revealed strong positive sentiments regarding smartphone performance and user experience, while also identifying concerns about camera and battery life. However, while the model effectively captures positive feedback, it may struggle with negative feedback and especially neutral sentiments, due to the dataset’s bias toward positive reviews.

Conclusions:  the application of BERT and sentence transformers provides a significant technological advancement in the field of text analysis by enhancing the granularity of sentiment detection and offering a robust framework for interpreting complex data sets. This contributes to both theoretical knowledge and practical applications in digital consumer analytics.

Keywords: sentiment analysis; topic modeling; smartphone reviews; natural language processing

INTRODUCTION

In today’s digital era, smartphones have become integral to daily life, influencing communication, productivity, and entertainment. As the market continues to be saturated with numerous brands and models, customers increasingly rely on reviews to make informed purchasing decisions. These reviews, rich in insights about smartphone features, performance, and user satisfaction, serve as a critical resource for current and potential consumers. They not only highlight positive aspects but also reveal negative experiences, providing a balanced view that guides consumer choices (An et al., 2023). The impact of customer reviews extends beyond individual purchasing decisions, affecting the broader reputation and financial standing of smartphone manufacturers. For manufacturers and companies, customer feedback encapsulated in reviews is pivotal for maintaining a positive reputation, gaining a competitive edge, and positively influencing potential buyers. If negative reviews are not addressed, they can quickly escalate into significant reputational damage, leading to a decline in consumer trust and satisfaction, tarnishing brand image, and subsequently causing a drop in sales and market share as consumers tend to avoid products with poor ratings (Zhai et al., 2024). This reputational impact can have lasting financial consequences, as restoring brand image often requires substantial investment in marketing and product development.

Furthermore, these reviews are essential for companies to identify areas of dissatisfaction and opportunities for product improvements. In the current marketplace driven by consumer opinion, understanding and leveraging customer reviews can significantly influence the development and refinement of smartphone technologies (Zhang et al., 2023). Given the high stakes involved, understanding the nuanced dynamics of customer reviews on different platforms is essential for companies aiming to safeguard their reputation, avert financial losses, and capitalize on market opportunities.

Despite the known impact of customer reviews, many companies cannot use this feedback productively. The prevailing gaps include the lack of effective tools for analyzing sentiment and feedback trends and the absence of a systematic approach to integrating these insights into product development and marketing strategies. Although some studies have focused on sentiment analysis of customer reviews about smartphone brands, such as Chawla et al. (2017), who analyzed a dataset consisting of 1,000 reviews about various smartphone brands like OnePlus, Micromax, Samsung, Nokia, and Lava collected from Amazon, these studies often have limitations. Chawla et al. (2017) used NLP tools like unigrams, bigrams, POS tagging, and machine learning algorithms, primarily naïve Bayes and support vector machine (SVM). Chamlertwat et al. (2012) developed an automated system to collect tweets using the Twitter API, targeting smartphone-related keywords. The system employs SentiWordNet, a sentiment lexicon, to assess the sentiment (positive, negative, and neutral) of words within tweets and uses an SVM classifier with information gain (IG) feature selection for sentiment analysis. Singla et al. (2017) analyzed over 400,000 reviews across approximately 4,500 mobile phone models from Amazon, employing the Syuzhet package, which utilizes the NRC sentiment dictionary to classify sentiments into positive and negative categories. This study also used machine learning models, including naïve Bayes, SVM, and decision tree. Yiran and Srivastava (2019) analyzed Amazon reviews regarding the iPhone X and other unlocked mobile phones using context-based sentiment analysis with a domain-specific lexicon and traditional sentiment scores from SentiWordNet. However, there are some limitations to these methodologies. For example, basic linguistic features in these studies may overlook complex language elements like sarcasm or cultural nuances. The model used by Chawla et al. (2017) had a low accuracy rate of 40%, suggesting issues with feature selection or training. The methodology of Chamlertwat et al. (2012) primarily captures surface-level sentiments linked to specific features, which may not delve into the deeper reasons behind consumer opinions. The effectiveness of the model used by Singla et al. (2017) is highly dependent on the quality of text preprocessing; inadequate preprocessing could result in the retention of irrelevant features. The methodology of Yiran and Srivastava (2019) typically assigns one aspect per sentence, potentially overlooking sentences that discuss multiple elements, leading to context misinterpretation.

This research aims to bridge existing gaps using sentiment analysis and topic modeling. Sentiment analysis, rooted in computational linguistics and natural language processing (NLP), processes human language to extract underlying sentiments (Pang & Lee, 2008). Topic modeling, based on machine learning and probabilistic modeling, particularly uses latent Dirichlet allocation (LDA) to identify themes within extensive texts (Blei et al., 2003). We propose utilizing advanced deep learning models, specifically the BERT model and sentence transformers, to analyze customer reviews of smartphones. BERT’s bidirectional training effectively captures word nuances and sentiments by analyzing their full context, thereby enhancing the accuracy of sentiment analysis (Devlin et al., 2018; Sun et al., 2019). Sentence transformers complement this by generating sentence-wide embeddings that tackle issues like sarcasm and idioms, which previous models struggled with (Reimers & Gurevych, 2019). Together, these models excel in real-time sentiment analysis, adapt swiftly to language variations, and robustly address the contextual misinterpretation issues (Devlin et al., 2018; Xu et al., 2019) found in prior methodologies as highlighted by Cambria et al. (2013) and Medhat et al. (2014). By doing so, they not only pinpoint prevailing consumer sentiments and topics but also assist smartphone manufacturers in fine-tuning their products to better align with consumer preferences and expectations. The primary objective of this study is to develop and validate a robust analytical framework that can systematically analyze and interpret consumer feedback from online reviews. This framework aims to provide actionable insights that smartphone companies can utilize to enhance their product strategies and operational tactics. Specifically, the study will focus on the sentiment and thematic patterns in customer reviews, aiming to uncover the nuanced influences these factors have on consumer purchasing decisions and brand perception. The ultimate goal is to facilitate a more informed decision-making process for companies, enabling them to improve product features, address consumer grievances effectively, and enhance overall customer satisfaction.

CONTEXT AND THE INVESTIGATED REALITY

Our research is set within the dynamic smartphone sector, a vital part of the global consumer electronics market. This industry has experienced exponential growth since the early 2000s, with subscriber numbers reaching over 5.6 billion in 2023 and projected to increase to approximately 6.4 billion by 2029 (Statista, 2023). In 2023 alone, mobile technologies and services contributed 5.4% to the global GDP, amounting to $5.7 trillion and supporting around 35 million jobs (Pew Research Center, 2022). With annual sales in the hundreds of millions, smartphones have become indispensable, offering advanced computing capabilities and connectivity.

The industry’s origins trace back to the late 1980s with basic mobile phones, experiencing a significant evolution with IBM’s introduction of Simon in 1994, the first device featuring additional capabilities like email and a touchscreen (Time Magazine, 2014; World Economic Forum, 2018). A major transformation occurred in 2007 with Apple’s launch of the iPhone, which redefined mobile technology and user expectations. This evolution has continued with substantial advancements in 5G and artificial intelligence, profoundly affecting user interaction and service delivery across diverse sectors, including finance, healthcare, and education (Latif et al., 2017).

This sector is characterized by high consumer involvement, where preferences are greatly influenced by brand loyalty, technological innovation, and value for money, driving purchasing decisions across various demographic segments (Chen et al., 2016). Additionally, the industry is shaped by multiple internal factors such as financial strength and profitability, competitive strength and market standing, innovation and R&D investment, supply chain management, brand loyalty, and consumer perception. External factors also play a crucial role, including competition from rival sellers and new entrants, the bargaining power of suppliers and customers, and various regulatory and geopolitical challenges that influence market conditions (Liu, 2024). The dynamic and expanding nature of the smartphone market further underscores the importance of analyzing customer reviews, offering vital insights into consumer behavior and enhancing market responsiveness.

DIAGNOSIS OF THE PROBLEM SITUATION AND/OR OPPORTUNITY

Our research harnesses the rapid growth of electronic word-of-mouth through online consumer reviews, which are widely accessible and significantly impact purchasing decisions. These reviews have become a crucial element of consumer behavior in the digital age (Hennig-Thurau et al., 2004). The primary problem addressed by our study is the significant underutilization of this valuable data, which remains largely unexplored due to its unstructured nature and the linguistic complexity involved. This research proposes an advanced methodological framework that enhances the depth of analysis possible from such data, significantly improving upon traditional sentiment analysis techniques by employing state-of-the-art NLP tools, including BERT and sentence transformers (Devlin et al., 2018).

A multidisciplinary approach is used in this study, integrating three disciplines: (1) Computational Linguistics: to apply robust sentiment analysis and topic modeling, effectively decoding complex consumer feedback; (2) Data Science: utilizing cutting-edge machine learning algorithms to manage large datasets and uncover actionable insights; (3) Consumer Behavior and Marketing: incorporating theories from these disciplines to contextualize sentiment analysis results within consumer behavior patterns and strategic marketing frameworks (Kotler & Keller, 2016).

In our study, we employed a triangulation method to enhance the validity and depth of our analysis by integrating both quantitative and qualitative data sources. We utilized sentiment analysis on large datasets from Kaggle, specifically targeting reviews for OnePlus, Apple iPhone, and K8 Mobile. Complementarily, we conducted topic modeling using the sentence transformer model to capture qualitative insights into underlying themes within the reviews. This method of merging quantitative and qualitative data allowed us to cross-validate findings and gain a more holistic view of consumer opinions, significantly enhancing the reliability and richness of our analysis.

Both techniques, sentiment analysis and topic modeling, are explained below, along with an overview of related work.

SENTIMENT ANALYSIS

Sentiment analysis, also known as opinion mining, is a prominent natural language processing (NLP) subfield that focuses on recognizing and classifying people’s opinions, sentiments, evaluations, attitudes, and emotions toward a variety of entities, including people, products, and services. The method, which has attracted considerable attention in recent years, makes use of the enormous volumes of data produced by online interactions to obtain crucial insights into several fields, such as political science, public opinion analysis, and marketing. The primary goal is to extract insights that inform decision-making processes, with substantial applications across commercial, social, and political fields. This approach not only facilitates understanding of consumer behavior but also enhances interaction strategies for businesses and policymakers (Feldman, 2013; Li, 2011; Neviarouskaya et al., 2011; Shayaa et al., 2018).

The field began to take shape with the growth of the internet, enabling the analysis of vast user-generated content. Early methods were primarily concerned with detecting polarity in texts - whether an opinion was positive, negative, or neutral. Pioneering studies by Turney (2002) and Pang et al. (2002) were among the first to apply machine learning techniques to this task. Turney (2002) used unsupervised methods to assess reviews based on semantic orientation, while Pang et al. (2002) utilized supervised learning to classify the sentiment of movie reviews, demonstrating the potential of machine learning over manual rule-based systems.

As the field evolved, sentiment analysis methodologies grew more sophisticated, incorporating advanced machine learning and deep learning techniques. Anvar Shathik and Krishna Prasad (2020) noted the use of these methods to detect sentiment polarity at various levels of text, such as documents, paragraphs, and sentences. The integration of deep learning has been particularly transformative, with recent methodologies employing complex word embedding techniques like word2vec, GloVe, and BERT to enhance the understanding of linguistic patterns in large datasets (Devi et al., 2024).

The literature identifies several specific types of sentiment analysis, each serving different analytical purposes. Document-level analysis assesses the overall sentiment of a text, while sentence-level analysis breaks down sentiment by individual sentences. Aspect-based sentiment analysis, another important type, focuses on understanding sentiments related to specific attributes of entities, providing nuanced insights that are especially useful in customer feedback analysis (Shayaa et al., 2018; Sivakumar & Uyyala, 2021). Comparative sentiment analysis, meanwhile, looks at sentiments across multiple entities to gauge relative opinions.

The applications of sentiment analysis are diverse and impactful. In business, it helps companies monitor brand and product sentiment through customer feedback, enhancing market analysis and customer service. In social and political realms, sentiment analysis aids in gauging public opinion on various issues, thereby informing policy and election strategies (Zhang et al., 2018). The theoretical underpinnings of sentiment analysis, deeply rooted in linguistics, psychology, and computational sciences, have enabled these applications, reflecting the complex interplay of machine learning, language, and human emotion in modern data analytics.

Sentiment analysis related work

In the burgeoning field of sentiment analysis, various studies have integrated advanced machine learning techniques to enhance precision and applicability across diverse datasets. Chen et al. (2017) introduced a sentiment analysis framework employing a multiple-attention mechanism with bidirectional LSTM (long short-term memory), tailored to analyze sentiments toward specific targets within texts. This model demonstrates significant accuracy improvements by effectively handling distant sentiment features and filtering out irrelevant information. Similarly, Gondhi et al. (2022) utilized LSTM networks combined with word2vec embeddings to analyze sentiments in Amazon e-commerce reviews, highlighting the model’s superior performance over traditional baselines.

Deep learning techniques have also been widely adopted to address the complexities of sentiment analysis. Zheng et al. (2018) provided a comprehensive review of various deep learning architectures used in sentiment analysis, including neural networks, CNNs (convolutional neural networks), RNNs (recurrent neural networks), LSTMs, and attention mechanisms. These technologies are noted for their ability to handle long-range dependencies in texts and incorporate nuanced details such as user and product information. In parallel, Kaur and Sharma (2023) and Sivakumar and Uyyala (2021) explored innovative deep-learning approaches using LSTM networks for analyzing consumer sentiments, demonstrating how hybrid feature vectors and fuzzy logic can enhance model accuracy and address issues such as sarcasm and ambiguity. Yada and Vishwakarma (2020) focused on various deep learning models for sentiment analysis, noting their effectiveness in processing textual sentiment data and overcoming challenges posed by traditional methods.

Further refining sentiment analysis, Yang and Cardie (2014) enhanced conditional random field (CRF) models with context-aware constraints to improve sentiment interpretation at the sentence level. This method, which leverages both local and global contexts, showed superior performance in various settings. On the practical application front, Gregory et al. (2006) and Micu et al. (2017) explored sentiment analysis within specific domains such as document sentiment exploration and restaurant reviews, respectively. The former utilized lexical sentiment scoring combined with visual analytics tools to offer detailed insights into document collections, while the latter assessed how geographic factors influence restaurant ratings through sentiment analysis.

In terms of summarizing and making actionable decisions from sentiment analysis, Kumar and Parimala (2020) presented methodologies for summarizing customer reviews and integrating sentiment analysis with decision-making frameworks. Hu and Liu’s (2004) feature-based summarization system is particularly notable for its high accuracy in feature extraction and sentiment classification, offering valuable insights for both consumers and manufacturers. Huang et al. (2016) also focused on fine-grained sentiment analysis of Chinese online mobile phone reviews, analyzing individual product features to provide detailed consumer insights.

Lastly, the scalability and adaptability of sentiment analysis techniques are evidenced in studies by Salem and Maghari (2020) and Sharma et al. (2022), who evaluated various machine learning classifiers and deep learning models for their effectiveness in large-scale sentiment categorization. Prabowo and Thelwall (2009) developed a comprehensive strategy that combines rule-based classification, machine learning, and unsupervised learning techniques, aiming to robustly analyze sentiments across diverse textual sources. Wang et al. (2017) integrated sentiment analysis with econometric models to assess how sentiments expressed in online reviews can influence consumer purchase decisions, highlighting the varying importance of product aspects.

TOPIC MODELING

Topic modeling is a transformational statistical technique used in natural language processing to organize and comprehend vast amounts of textual data by identifying latent topics within them. Its evolution began in the late 1980s with the development of latent semantic analysis (LSA) (Deerwester et al., 1990). LSA detected patterns in word usage across documents by decomposing large text matrices, laying the groundwork for discovering underlying semantic structures in texts (Vayansky & Kumar, 2020). Building on LSA, Hofmann (1999) proposed probabilistic latent semantic analysis (PLSA), which increased the robustness of topic detection by adopting a probabilistic approach, considerably enhancing the interpretability of the results (Zhang et al., 2023).

A major advancement in topic modeling occurred with the introduction of latent Dirichlet allocation (LDA) by Blei et al. (2003), which marked a transformative moment in the field. LDA, a generative probabilistic model, revolutionized topic modeling by representing documents as mixtures of hidden topics, with each topic characterized by a specific word distribution. This model quickly became the standard due to its flexibility and ability to produce meaningful and interpretable results (Bastani et al., 2019).

Since LDA’s introduction, numerous methodological advancements have followed. Dynamic topic models (DTM) were developed to analyze how topics evolve over time, allowing researchers to track changes within a corpus across different periods. The correlated topic model (CTM) further extended LDA’s capabilities by accounting for correlations between topics, enabling the discovery and representation of complex relationships among different topics (Xun et al., 2017). These models have been adapted to handle larger datasets and have expanded beyond textual data to applications involving images and genetic information, illustrating the broad applicability and versatility of topic modeling techniques.

The applicability of topic modeling is broad, with substantial applications in academic research, digital marketing, journalism, and public health. In academia, it is used to identify trends and structural themes in scientific publications, allowing for a better understanding of research dynamics. Marketers use topic models to sift through consumer feedback and uncover common sentiments and preferences that guide strategic decisions. Media organizations employ these models to classify and recommend content, enhancing user engagement through personalized content distribution. In public health, topic models analyze patient records and medical reports to find common symptoms and treatments that aid in disease understanding and management (Blei et al., 2003; DiMaggio et al., 2013; Gregoriades et al., 2021; Paul & Dredze, 2011).

A notable advancement in the field is the transformer model known as all-MiniLM-L6-v2. This model significantly improves topic modeling by generating high-quality sentence embeddings that capture subtle semantic meanings more effectively than traditional word-level embeddings. Developed by Reimers and Gurevych (2019), the all-MiniLM-L6-v2 employs a fine-tuned BERT architecture to produce embeddings that maintain semantic coherence across various texts. This methodological enhancement is crucial for addressing the complexities of language in large datasets, thereby boosting the accuracy and granularity of topic detection and analysis. The integration of advanced embedding techniques such as sentence transformers into topic modeling frameworks represents a significant advancement in the discipline, opening up new avenues for extracting deeper, more precise insights from textual data. This development not only improves existing models but also broadens their applicability, ensuring that topic modeling remains an important and dynamic tool in data-driven research (Devlin et al., 2018).

Topic modeling analysis related work

Recent advancements in topic modeling have significantly improved the nuanced extraction and analysis of consumer opinions from online reviews, utilizing various novel machine learning and deep learning approaches.

Bagheri et al. (2014) introduced ADM-LDA, an advanced unsupervised model based on latent Dirichlet allocation (LDA), which enhances aspect detection accuracy by considering sentence structure, word order, and semantic relationships. This method shows substantial improvements over standard LDA in extracting more coherent and informative aspects from review sentences. Similarly, Zhai et al. (2011) developed constrained LDA, a semi-supervised approach that significantly outperforms traditional LDA by utilizing must-link and cannot-link constraints to more effectively group product features in opinion mining. Anoop and Asharaf (2019) further explored the application of LDA for aspect-oriented sentiment analysis of e-commerce product reviews, mapping extracted topics to various product aspects to enhance the granularity of sentiment analysis. Yiran and Srivastava (2019) also employed LDA in their ontology framework to extract key aspects such as screen, camera, and battery life from user-generated reviews on Amazon, specifically analyzing iPhone X reviews.

Other studies have integrated complex models to analyze consumer behavior and sentiment. Li and Ma (2020) combined topic modeling with hidden Markov models to better understand consumer search phrases and their links to website visits and purchases, offering insights for targeted marketing strategies. Ramshankar and Prathap (2023) introduced an enhanced sentiment analysis model using a heuristic-based CNN-BiLSTM, leveraging the improved galactic swarm optimization algorithm to significantly boost prediction accuracy in e-commerce datasets. Further innovations include An et al. (2023), who employed clustering algorithms and transformer-based models for trend forecasting and product analysis in e-commerce reviews, demonstrating the effectiveness of these models in deriving actionable marketing insights. Jeong et al. (2019) utilized topic modeling and sentiment analysis on social media data to identify smartphone product development opportunities, specifically examining customer satisfaction and areas for improvement in the Samsung Galaxy Note 5. For product review and market analysis, K. Chen et al. (2015) integrated topic modeling with TOPSIS and multi-dimensional scaling (MDS) to extract product features from reviews, rank products, and visualize the competitive landscape, showing how consumer preferences evolve. F. Wang et al. (2019) studied consumer behavior differences on domestic versus overseas e-commerce platforms, using the biterm topic model (BTM) to highlight how differing consumer priorities affect product ratings.

On the summarization front, Li et al. (2020) presented a novel summarization method that combines review texts with summary analysis, using LDA to identify key topics and sentiments, further refining content through style classification between summaries and reviews. Mahadevan and Arock (2020) introduced the sentiment enriched and LDA-based review rating prediction model (SELDAP), which integrates LDA with VADER sentiment analysis to enhance accuracy in predicting review ratings. In the realm of filtering and refining data, Joung and Kim (2021) refined LDA analysis of online reviews by filtering out irrelevant keywords using product manuals, thereby enhancing the precision of product attribute identification. Zhan et al. (2009) proposed an automatic method to summarize customer concerns from online product reviews by identifying and ranking key topics, which aids firms in understanding and addressing consumer issues effectively.

These studies collectively highlight the dynamic advancements in sentiment analysis and topic modeling techniques, showcasing a variety of approaches that extend from enhancing basic models to incorporating sophisticated systems that combine deep learning and hybrid methodologies.

This research focuses on sentiment classification and topic modeling for mobile phone reviews using advanced natural language processing techniques. Data from the OnePlus, Apple iPhone, and K8 Mobile datasets on Kaggle are utilized. For sentiment classification, we employ the BERT (bidirectional encoder representations from transformers) model to capture the context of words within the text. The model architecture consists of input layers for tokenized text, a pre-trained TFBertModel for processing, and additional dense layers for classification. It is fine-tuned specifically to classify sentiments as positive, neutral, or negative based on the review text. For topic modeling, this study uses the sentence transformer model, specifically the all-MiniLM-L6-v2 variant, to generate sentence embeddings, which are then clustered using the k-means algorithm to identify and interpret different topics within the reviews.

ANALYSIS OF THE PROBLEM SITUATION AND PROPOSALS FOR INNOVATION/INTERVENTION/RECOMMENDATION

Table 1 presents the alternative research methodologies applied to smartphone reviews in previous studies to address the problem mentioned in the introduction of this study, along with their findings.

Table 1
Summary of sentiment analysis techniques and findings from selected studies.

METHODOLOGY APPLIED IN THIS RESEARCH

The computational setup for our analysis included a hardware configuration featuring an Intel Core i7-10750H CPU, an NVIDIA GeForce RTX 2060 GPU with 6 GB of VRAM, and 32 GB of DDR4 RAM. The software environment was based on Windows 10 and utilized Python 3.8, managed and executed within Jupyter Notebook. This platform was chosen for its interactive nature, allowing for seamless integration of code execution, data visualization, and narrative explanation in a single document.

Key libraries and frameworks employed in the analysis included core libraries such as Hugging Face Transformers, Sentence Transformers, scikit-learn, TensorFlow 2.4.1, and KMeans for clustering, along with data analysis libraries like Pandas, NumPy, Matplotlib, and Seaborn for statistical analysis and visualization. Additionally, the NLTK library was used for text processing, and the Python time module was used to measure the duration of various computational tasks. This comprehensive setup provided the necessary computational power and flexibility to effectively conduct our sentiment analysis and topic modeling tasks.

The computational analysis process involved several stages, each requiring substantial time to ensure accuracy and reliability:

(1) Data Preprocessing:

  • This step involved extensive text cleaning, tokenization, normalization, and the transformation of categorical data, which took approximately 15 hours. The preprocessing stage ensured the dataset was structured appropriately for downstream analysis.

(2) Sentiment Analysis:

  • Model Selection and Training: we employed the BERT model using Hugging Face Transformers for sentiment classification. This phase required around 30 hours, as it included fine-tuning the model on our dataset and adjusting hyperparameters to optimize performance.

  • Evaluation and Testing: the model was evaluated on a validation set, requiring 5 hours to ensure accuracy and prevent overfitting. Testing on a separate test set further validated the model’s robustness.

  • Execution on Full Dataset: applying the trained model to the complete dataset took an additional 5 hours.

(3) Topic Modeling:

  • Sentence Embedding Generation: we used the sentence transformer model (all-MiniLM-L6-v2) to generate sentence embeddings. This step took about 6 hours to encode all reviews into meaningful embeddings.

  • Clustering with k-means: the embeddings were then clustered into topics using the k-means algorithm, which took approximately 4 hours.

  • Topic Interpretation and Evaluation: finally, interpreting the topics and evaluating the coherence of clusters required 3 hours.

In total, the entire process took approximately 68 hours. This breakdown highlights the substantial computational resources and time required for a thorough analysis.

The BERT model and sentence transformer

This section describes the model chosen for the sentiment classification and topic modeling tasks.

For sentiment classification, we selected the BERT (bidirectional encoder representations from transformers) model due to its superior ability to understand the context of words in textual data. BERT’s effectiveness in sentiment analysis makes it an ideal choice for our study, given its capacity to discern the nuanced shifts in meaning that context imparts to words.

BERT stands out in natural language processing (NLP) due to its attention mechanism, which enables it to focus on different parts of the text, mirroring human reading comprehension. Unlike traditional unidirectional models, its bidirectional approach allows for a comprehensive understanding of word context from both preceding and following text. Pre-training on a vast corpus equips BERT with an intricate understanding of language subtleties, enhancing its performance even before fine-tuning for specific tasks like sentiment classification. Pre-training a language model like BERT involves training the model on a large dataset to learn language patterns and structures. This process allows the model to quickly adapt to and excel in new tasks with comparatively less task-specific data, making it particularly effective for sentiment classification.
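To make the model inputs described later concrete, the sketch below shows how a review could be tokenized for 'bert-base-uncased' with the Hugging Face tokenizer; the example sentence and the max_length value of 128 are illustrative assumptions, not the study's exact configuration.

```python
# A hedged sketch of tokenizing a review for BERT; sentence and max_length are illustrative.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer(
    "The battery life is great, but the camera struggles at night.",  # illustrative review
    padding="max_length", truncation=True, max_length=128, return_tensors="tf",
)
# input_ids holds the token indices; attention_mask marks which positions are real tokens.
print(encoding["input_ids"].shape, encoding["attention_mask"].shape)  # (1, 128) each
```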

BERT is based on the transformer architecture, which relies on self-attention mechanisms to process input text. The key mathematical components of BERT related to the scope of this research include:

Self-Attention Mechanism: BERT uses the self-attention mechanism (Vaswani et al., 2017) to weigh the importance of different words in a sentence relative to each other. This means that every word can pay attention to other words, helping it understand the context better. This is mathematically represented as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where Q (query), K (key), and V (value) are matrices derived from the input embeddings, and $d_k$ is the dimension of the key vectors. The dot product of Q and K is scaled by $1/\sqrt{d_k}$ and passed through a softmax function to obtain the attention weights, which are then multiplied by V (Vaswani et al., 2017).

Self-attention plays a key role in understanding and capturing contextual relationships and long-range dependencies in text data. This makes it highly valuable for tasks like sentiment analysis and topic modeling, where context and relationships between words significantly impact the results (Vaswani et al., 2017).
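As a minimal illustration of this mechanism, the following NumPy sketch implements the scaled dot-product attention equation above on toy matrices; the shapes and values are illustrative only and are not taken from the BERT implementation used in this study.

```python
# Minimal NumPy sketch of scaled dot-product attention on toy data.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # attention weights sum to 1 per query
    return weights @ V                   # weighted sum of value vectors

# Toy example: 3 tokens with 4-dimensional query/key/value vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```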

Feed-Forward Network: each layer in BERT includes a position-wise feed-forward network, applied to each position separately and identically:

$$\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)W_2 + b_2$$

where $W_1$ and $W_2$ are weight matrices, and $b_1$ and $b_2$ are bias vectors (Vaswani et al., 2017).
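A similarly minimal sketch of the position-wise feed-forward network is shown below; the dimensions are illustrative (for reference, BERT-base uses a hidden size of 768 with an inner dimension of 3,072).

```python
# Position-wise feed-forward network from the equation above, sketched in NumPy.
import numpy as np

def feed_forward(x, W1, b1, W2, b2):
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2  # ReLU(x W1 + b1) W2 + b2

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 8))                     # 3 token positions, model dimension 8
W1, b1 = rng.normal(size=(8, 32)), np.zeros(32)  # expand to inner dimension
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)   # project back to model dimension
print(feed_forward(x, W1, b1, W2, b2).shape)     # (3, 8): applied identically per position
```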

We utilize the base version of BERT, optimized for our study’s needs, emphasizing computational efficiency while maintaining adaptability for sentiment classification tasks. This BERT variant retains essential features like the attention mechanism and bidirectional context capture but is streamlined for greater speed and efficiency. Its proven effectiveness in various NLP domains and open-source availability justifies our choice, promoting broader experimentation and innovation.

For topic modeling, we employ the sentence transformer model, built upon the same foundation as BERT but specifically designed to capture the essence of sentences rather than individual words. The model operates under a Siamese and triplet network structure that optimizes sentence embeddings to be semantically meaningful and comparable in the vector space.

The all-MiniLM-L6-v2 variant of the sentence transformer that we employ is a distilled version of a larger model, maintaining a balance between performance and efficiency. It retains the core characteristics of BERT, including its ability to learn contextually enriched word representations. Still, it is specifically fine-tuned to produce sentence embeddings more suitable for comparing textual similarity at the sentence level. This fine-tuning involves training on pairs or triplets of sentences to pull semantically similar sentences closer in the embedding space while pushing dissimilar ones apart. The model’s capacity to generate embeddings that capture deep semantic meanings is particularly advantageous for clustering algorithms, which rely on these nuanced representations to group sentences into coherent topics.
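As an illustration of how such embeddings can be generated, the sketch below uses the sentence-transformers library to encode two hypothetical review sentences with all-MiniLM-L6-v2, which outputs 384-dimensional vectors; the review texts are placeholders.

```python
# Hedged sketch of sentence embedding generation with all-MiniLM-L6-v2.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
reviews = [
    "Battery life is excellent and the phone feels fast.",
    "Camera quality is disappointing in low light.",
]
embeddings = model.encode(reviews, batch_size=64, show_progress_bar=False)
print(embeddings.shape)  # (2, 384): one 384-dimensional vector per review
```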

By using both BERT for sentiment analysis and the sentence transformer for topic modeling, our deep-learning model leverages the strengths of transformer-based architectures to deliver state-of-the-art performance in understanding and classifying textual data. The synergy of these models demonstrates the effectiveness of transformer-based models in a wide range of NLP tasks.

The dataset and data preprocessing steps are explained in the next section.

DATASETS

Our research proposes sentiment analysis and topic modeling approaches in the context of mobile phones, requiring corresponding data that captures public sentiments for various mobile phone brands. After thorough research, we selected the OnePlus, Apple iPhone, and K8 Mobile datasets from Kaggle. These datasets were chosen for the following reasons: (1) they are open source and freely available, and (2) existing studies have utilized these datasets for similar tasks, making it easier to benchmark our results. The OnePlus, Apple iPhone, and K8 datasets contain 30,612, 5,010, and 14,675 records, respectively. Notably, the dataset is highly imbalanced.

We intentionally utilized review datasets from three distinct mobile brands - K8, iPhone, and OnePlus - to capture a broad and authentic representation of customer evaluations across different products. By integrating reviews from multiple sources, we aimed to reflect real-world sentiments and maintain the natural distribution of opinions present in the market. Despite efforts to balance the datasets by incorporating reviews from various brands, the prevalence of positive reviews remains higher than that of negative reviews. This outcome aligns with industry observations where positive reviews often outnumber negative ones. Our approach ensures that the data mirrors actual consumer behavior and sentiment, providing a realistic basis for analysis.

The primary objective of our study is to present a true picture of customer feedback without artificially altering the dataset. Introducing artificial balance through techniques like oversampling or undersampling could distort the genuine nature of customer sentiments. Our methodology is designed to offer insights based on unaltered, real-world data, which we believe is crucial for accurate analysis and interpretation. We acknowledge that imbalances in the dataset can impact model performance. However, our evaluation includes accuracy metrics (Table 2) that go beyond overall accuracy. For instance, while the model demonstrated high accuracy in detecting positive sentiments across datasets, it showed varied performance in identifying negative and neutral sentiments. This comprehensive evaluation approach aims to provide a nuanced understanding of the model’s performance across different sentiment classes and address the potential impact of data imbalance on our findings.

Table 2
Sentiment classification.

In the OnePlus and Apple iPhone datasets, ratings are initially presented as strings, such as ‘4.0 out of 5.’ We transform these into numerical values, for example, turning ‘4.0 out of 5’ into 4.0. Then, we categorize these ratings: 4.0 and 5.0 are labeled as positive, 3.0 as neutral, and ratings below 3.0 are considered negative. The K8 dataset uses binary values for its output labels. In our preprocessing, we interpret 1s as positive and 0s as negative. This step prepares the data for sentiment analysis and topic modeling, aligning it with the requirements of these tasks.
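A hedged pandas sketch of this mapping is given below; the column names and example values are assumptions for illustration, since the exact schema of the Kaggle files is not reproduced here.

```python
# Illustrative sketch of the rating-to-label mapping described above.
import pandas as pd

def rating_to_label(rating_str):
    value = float(str(rating_str).split(" out of")[0])  # '4.0 out of 5' -> 4.0
    if value >= 4.0:
        return "positive"
    if value == 3.0:
        return "neutral"
    return "negative"

# OnePlus / Apple iPhone style ratings (string form).
oneplus = pd.DataFrame({"rating": ["4.0 out of 5", "2.0 out of 5", "3.0 out of 5"]})
oneplus["label"] = oneplus["rating"].apply(rating_to_label)

# K8 dataset uses binary labels: 1 -> positive, 0 -> negative.
k8 = pd.DataFrame({"sentiment": [1, 0, 1]})
k8["label"] = k8["sentiment"].map({1: "positive", 0: "negative"})
print(oneplus["label"].tolist(), k8["label"].tolist())
```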

We merged the three datasets for topic modeling to create an overarching topic modeling model that considers sentiments for different mobile phones. Figure 1 shows the distribution of output labels for the OnePlus, Apple iPhone, and K8 datasets. It is evident that the dataset is highly imbalanced for the OnePlus and Apple iPhone datasets, with the positive class overwhelmingly outweighing the negative and neutral classes. The dataset for K8 reviews is more balanced, with the positive class slightly dominating the negative class.

Figure 1
Data distribution for all datasets.

Figure 1 illustrates the distribution of labels for the combined OnePlus, Apple iPhone, and K8 datasets. The figure reveals that we have a highly imbalanced dataset for topic modeling and sentiment classification. This reflects a general public tendency to refrain from expressing negative sentiments toward major mobile phone brands.

Model Architecture

We developed two separate model architectures for sentiment classification and topic modeling tasks.

Model for sentiment classification

Figure 2 depicts the model architecture for the sentiment classification task. The model is built using TensorFlow’s Keras API and is tailored to process input data in a structured format suitable for natural language processing tasks.

Figure 2
Model Architecture for Sentiment Classification.

The model begins with an input layer designed to handle two distinct sequences: 'input_ids' and 'attention_masks', each with a shape corresponding to 'max_length', signifying the maximum sequence length BERT can process. These inputs are integers representing the tokenized form of the text data and the attention mask values that indicate whether a particular token should be attended to. Following the input layer, the model utilizes the pre-trained 'TFBertModel' from Hugging Face’s transformers library. This BERT model processes the 'input_ids' and 'attention_masks', outputting a sequence of hidden states for each token in the input. The BERT model used here is the 'bert-base-uncased', which consists of 12 layers (transformer blocks), a hidden size of 768 units, and 12 self-attention heads.

We focus on the first token’s last hidden state for classification purposes, commonly called the [CLS] token. This token’s output is believed to encapsulate the aggregate meaning of the input sequence and is, therefore, used for classification tasks. It is extracted using a 'SlicingOpLambda' layer that selects the [CLS] vector from the sequence output. After extracting the [CLS] token, the architecture includes two additional dense layers with 512 and 256 neurons, respectively. These values are selected based on best practices for fine-tuning pre-trained BERT models. Each dense layer uses the ReLU (rectified linear unit) activation function, providing non-linearity to the model. Between these two dense layers, a dropout layer with a dropout rate of 0.1 is introduced to prevent overfitting by randomly setting input units to 0 at each step during training. The final dense layer acts as the output layer with a size equivalent to the number of classes in the classification task. The softmax activation function is applied to this layer to obtain the probability distribution over the class labels.
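The sketch below condenses this architecture into Keras code, assuming TensorFlow 2.x and the Hugging Face Transformers library; the max_length of 128 and the three-class output are illustrative of the set-up described, not a verbatim copy of the study's implementation.

```python
# Hedged sketch of the BERT-based classification architecture described above.
import tensorflow as tf
from transformers import TFBertModel

MAX_LENGTH, NUM_CLASSES = 128, 3  # illustrative values

input_ids = tf.keras.layers.Input(shape=(MAX_LENGTH,), dtype=tf.int32, name="input_ids")
attention_masks = tf.keras.layers.Input(shape=(MAX_LENGTH,), dtype=tf.int32, name="attention_masks")

bert = TFBertModel.from_pretrained("bert-base-uncased")
sequence_output = bert(input_ids, attention_mask=attention_masks)[0]  # last hidden states
cls_token = sequence_output[:, 0, :]                                  # [CLS] vector

x = tf.keras.layers.Dense(512, activation="relu")(cls_token)
x = tf.keras.layers.Dropout(0.1)(x)                                   # regularization
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x) # class probabilities

model = tf.keras.Model(inputs=[input_ids, attention_masks], outputs=outputs)
```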

In summary, the presented architecture builds upon the powerful pre-trained BERT model, adding custom dense layers to tailor it to a specific classification task. The architecture is designed to handle variable-length text input and provide a probability distribution over possible output classes, making it suitable for various NLP classification tasks.

EXPERIMENTS AND RESULTS

Sentiment classification

The sentiment classification model is compiled with a categorical cross-entropy loss function, which is suitable for multi-class classification tasks. The Adam optimizer, with a learning rate of 2e-5, minimizes the loss function. This learning rate is often chosen for fine-tuning BERT models as it is small enough to make subtle updates to the pre-trained weights.

We divide the dataset into training, validation, and test sets with ratios of 80:10:10. The model training commences for ten epochs with a batch size of 16. The model fits the training data, comprising the 'input_ids' and 'attention_mask' from 'train_encodings' and the corresponding training labels. Simultaneously, it validates on a separate validation set to monitor performance on data not seen during the training phase. Throughout the training epochs, the 'ModelCheckpoint' callback acts as a performance gatekeeper, ensuring that only the most accurate version of the model on the validation data is preserved. This approach not only streamlines the model selection process but also safeguards against overfitting by halting the model’s saving when the validation accuracy ceases to improve. Upon completion of the training epochs, the best saved model is loaded for further evaluation on a test set.
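A minimal sketch of this compilation and training set-up is shown below, continuing from the model defined in the previous sketch; the encoding and label variables and the checkpoint file name are assumed placeholders rather than the study's actual objects.

```python
# Hedged sketch of compiling and fine-tuning the model; assumes `model`, the tokenized
# `train_encodings` / `val_encodings`, and one-hot label arrays `y_train` / `y_val` exist.
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="val_accuracy",       # file name is illustrative
    save_best_only=True, save_weights_only=True,
)

history = model.fit(
    x=[train_encodings["input_ids"], train_encodings["attention_mask"]],
    y=y_train,
    validation_data=([val_encodings["input_ids"], val_encodings["attention_mask"]], y_val),
    epochs=10, batch_size=16, callbacks=[checkpoint],
)
```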

Table 2 shows the model accuracies for sentiment classification on the OnePlus, Apple iPhone, and K8 datasets, as well as the overall results.

For the OnePlus dataset, the model demonstrated high accuracy in identifying positive sentiments, with a score of 95.21%. However, it showed a limited ability to detect negative sentiments accurately, achieving only an 18.19% score. The performance on neutral sentiments was moderate, at 63.12%. The overall accuracy for this dataset was 80.12%.

The Apple iPhone dataset results were exceptional regarding positive sentiment detection, with a near-perfect score of 99.14%. Negative sentiment detection was also robust, with a 73.12% score. Notably, the model failed to identify any neutral sentiments, resulting in a 0.00% performance in this category. Despite this, the overall accuracy was high at 92.87%.

Regarding the K8 dataset, the model had a lower positive sentiment detection rate at 90.87% compared to the OnePlus and Apple iPhone datasets. Negative sentiment detection was strong, at 88.24%. The model’s performance on neutral sentiments could not be evaluated due to missing data (indicated by ‘____’). The overall accuracy for this dataset was recorded at 92.34%.

Combined dataset performance

When evaluating the combined dataset, which integrated data from all three sources, the model achieved a positive sentiment detection rate of 94.21%. Its performance on negative sentiments was also high at 79.85%. However, similar to the individual datasets, the model struggled with neutral sentiment detection, scoring only 7.12%. The overall accuracy across the combined dataset was 83.32%.

Analysis of results

The disparity in model performance across categories and datasets can be attributed to several factors. Firstly, each dataset exhibited varying degrees of class imbalance, with the positive category dominating all datasets. This class imbalance likely contributed to the model’s proficiency in identifying positive sentiments, as it had more examples to learn from. Secondly, the abundance of positive examples in the training data may have led the model to develop a bias toward predicting positive sentiments. This is evident from the high detection rates for positive sentiments. Thirdly, the consistently lower scores for neutral sentiment detection across datasets suggest that neutral sentiments are inherently more challenging for the model to detect. This difficulty could be due to their less distinctive linguistic features compared to positive or negative expressions. Lastly, the Apple iPhone dataset’s failure to detect neutral sentiments may indicate an overfitting issue, where the model overly tailored its predictions to the dominant classes in the dataset.

The results show that while our model excels at detecting positive sentiments, there is a significant need to improve its ability to identify negative and neutral sentiments, particularly the latter. The class imbalance in the training data has likely introduced a bias that needs to be addressed. This could be done through techniques such as oversampling the minority classes or applying class-weight adjustments during training.
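As one hedged example of such an adjustment, the sketch below uses scikit-learn to compute balanced class weights that could be passed to Keras during training; the label counts are invented for illustration and do not correspond to the study's datasets.

```python
# Illustrative sketch of class-weight adjustment for an imbalanced sentiment dataset.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array(["positive"] * 800 + ["negative"] * 150 + ["neutral"] * 50)  # toy counts
classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
class_weight = dict(zip(range(len(classes)), weights))  # index order follows sorted classes
print(class_weight)  # could be passed to model.fit(..., class_weight=class_weight)
```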

Moreover, the model’s varying performance across datasets underscores the importance of diverse and balanced training data for developing robust sentiment analysis models. Future work should focus on enhancing the model’s sensitivity to underrepresented classes and improving its ability to generalize across different data sources to boost overall accuracy and reliability in sentiment classification tasks.

Topic modeling

The cornerstone of our topic modeling approach is the sentence transformer, specifically the all-MiniLM-L6-v2 variant. This compact but powerful transformer-based model is optimized for generating sentence-level embeddings. We initialize the model to encode batches of texts, transforming the varied-length textual data into fixed-size vectors that capture the semantic nuances of each sentence.

The generated embeddings are used as input for the k-means clustering algorithm, which partitions the embeddings into a specified number of clusters, each representing a potential topic within the data; k-means is favored for its simplicity and effectiveness in grouping data points based on feature similarity. We set the number of clusters to 7, aiming to balance granularity with broad thematic coverage. The selection of seven clusters was determined through a rigorous evaluation process. Various cluster counts were assessed using silhouette scores, which measure how similar an object is to its own cluster compared to other clusters. The seven-cluster configuration consistently achieved the highest silhouette scores, indicating optimal cluster coherence and separation. This choice was further validated by repeated k-means iterations with different random seeds, ensuring stability and robustness in our clustering approach.
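The sketch below illustrates this selection procedure under the assumption that the review embeddings are available as a NumPy array; random vectors stand in for the real sentence embeddings.

```python
# Hedged sketch of choosing the cluster count via silhouette scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
embeddings = rng.normal(size=(500, 384))  # placeholder for real review embeddings

scores = {}
for k in range(3, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    scores[k] = silhouette_score(embeddings, labels)  # higher = tighter, better-separated clusters

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])  # in the study, k = 7 scored highest on the real embeddings
```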

For each sentiment category in our dataset (positive, negative, and neutral; Figures 3, 4, and 5), we conducted k-means clustering on the sentence embeddings. This process assigns cluster labels and determines cluster centers for each category, effectively grouping the reviews into discernible topics. The most critical stage in topic modeling is interpreting the clusters. For each cluster center, we calculate the Euclidean distance between the center and all embeddings within the cluster, identifying the closest embeddings to the center. These closest embeddings are presumed to be the most representative of the cluster’s topic. We qualitatively interpret the topics by retrieving and examining the corresponding texts.

Figure 3
Word cloud for positive reviews.

Figure 4
Word cloud for negative reviews.

Figure 5
Word cloud for neutral reviews.

The next section presents the results obtained from our experiments. Before applying the sentence transformer technique, we plotted word clouds for the positive, negative, and neutral sentiment categories to visualize which words appear most frequently in each category.
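A small sketch of this word-cloud step is shown below, assuming the wordcloud package and a DataFrame with hypothetical 'review_text' and 'label' columns; the sample reviews are invented for illustration.

```python
# Hedged sketch of plotting a word cloud per sentiment category.
import matplotlib.pyplot as plt
import pandas as pd
from wordcloud import WordCloud

def plot_wordcloud(df, sentiment):
    text = " ".join(df.loc[df["label"] == sentiment, "review_text"])   # concatenate reviews
    wc = WordCloud(width=800, height=400, background_color="white").generate(text)
    plt.figure(figsize=(10, 5))
    plt.imshow(wc, interpolation="bilinear")
    plt.axis("off")
    plt.title(f"Word cloud for {sentiment} reviews")
    plt.show()

reviews_df = pd.DataFrame({
    "review_text": ["great battery and fast performance",
                    "battery drains quickly and the camera is poor",
                    "average phone for the price"],
    "label": ["positive", "negative", "neutral"],
})
plot_wordcloud(reviews_df, "positive")
```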

Next, we performed topic modeling using the sentence transformer technique, which is explained in the methodology section.

The initial step in our process was to transform the text data into embeddings. By passing the text through the sentence transformer, we generated dense vector representations for each review within our dataset, which inherently reflected the text’s semantic properties. With embeddings in hand, we applied the k-means clustering algorithm with a predefined number of clusters set to 7. This unsupervised learning method grouped the embeddings into clusters based on their similarity, with the hypothesis that each cluster would correspond to a unique topic present in the data.

To make sense of these clusters, we analyzed the centroids of each cluster, identifying the closest text embeddings to these central points. These closest instances were then qualitatively examined to infer the underlying topics, providing a narrative for each cluster formed.
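A minimal sketch of this centroid-based interpretation is given below; the review texts and embeddings are placeholders, and retrieving five representative reviews per cluster is an illustrative choice.

```python
# Hedged sketch of retrieving the reviews closest to each cluster centre.
import numpy as np
from sklearn.cluster import KMeans

reviews = [f"review {i}" for i in range(500)]   # placeholder texts
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 384))        # placeholder sentence embeddings

km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(embeddings)

for c, centre in enumerate(km.cluster_centers_):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[members] - centre, axis=1)  # Euclidean distance to centre
    closest = members[np.argsort(dists)[:5]]                      # 5 most representative reviews
    print(f"Cluster {c}:", [reviews[i] for i in closest])
```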

The results are presented in Table 3:

Table 3
Topic modeling of consumer reviews on mobile phones.

Topic interpretation:

  • The positive category predominantly reflects customer satisfaction, highlighting performance, value, user experience, and specific product features like the camera and battery. There is a general trend of approval within this category, with some minor reservations about specific features like battery life and camera quality.

  • The negative category topics reveal significant customer concerns, including complaints about camera and battery performance, hardware and display issues, and general dissatisfaction with product quality and functionality. Recurring themes such as poor camera quality and battery life criticism highlight critical areas for product improvement.

  • The neutral category offers a balanced perspective, covering a range of topics from performance issues and price considerations to software bugs and camera quality. The consistent mention of poor battery life across this category suggests an area of concern that may require particular attention from manufacturers.

Gains from the innovations

  • For the organization: our proposals enable smartphone companies to directly translate consumer sentiments into actionable insights for product development and marketing. This alignment can lead to improved product offerings and higher consumer satisfaction.

  • For internal and external stakeholders: the improved responsiveness to consumer needs fosters greater consumer trust and loyalty, benefiting internal stakeholders like marketing and product development teams. Externally, consumers enjoy products that better meet their expectations and needs, enhancing their overall experience and satisfaction with the brand.

CONCLUSIONS AND TECHNOLOGICAL/SOCIAL CONTRIBUTION

The findings of this research offer several significant implications across theoretical, practical, and methodological dimensions. By employing advanced NLP techniques such as BERT and sentence transformers, this study not only enhances the understanding of consumer sentiments but also provides a robust framework for analyzing complex textual data. These implications are crucial for both the academic community and industry professionals, particularly within the highly competitive smartphone market.

This research advances the theoretical understanding of sentiment analysis and topic modeling by integrating state-of-the-art models like BERT and sentence transformers. Unlike traditional methods that often fail to capture nuanced meanings in consumer feedback, this approach provides a deeper, context-aware analysis. The bidirectional nature of BERT allows for a more comprehensive understanding of sentiments, addressing issues such as sarcasm and cultural nuances that are often overlooked in previous studies. This contributes to the existing body of knowledge by demonstrating the effectiveness of deep learning models in handling complex linguistic patterns in consumer reviews.

Furthermore, the study’s findings highlight the dynamic interplay between consumer sentiments and brand perception, providing a more granular understanding of how specific smartphone features influence customer satisfaction. This adds to the growing literature on consumer behavior, offering insights that can be applied to other sectors where consumer feedback plays a critical role.

The practical implications of this research are particularly relevant for smartphone manufacturers and marketers. By uncovering distinct consumer sentiment patterns and thematic preferences, the study provides actionable insights that can inform product development and marketing strategies. For instance, identifying strong positive sentiments associated with specific features like performance and user experience suggests that manufacturers should continue to emphasize these aspects in their product designs and advertising campaigns.

Additionally, the findings on areas of concern, such as camera and battery performance, offer manufacturers clear targets for improvement. By addressing these issues, companies can enhance customer satisfaction, reduce negative reviews, and ultimately strengthen their market position. The ability to predict consumer responses to potential product changes or new releases based on sentiment trends further supports strategic decision-making, allowing companies to remain agile and responsive to market demands.

Beyond the smartphone industry, the methodologies applied in this study can be adapted for use in other industries that rely heavily on consumer feedback, such as e-commerce, hospitality, and technology. This cross-industry applicability highlights the versatility and relevance of the research.

This study makes several methodological contributions by demonstrating the effectiveness of combining sentiment analysis with topic modeling using advanced NLP techniques. The dual-model framework, leveraging BERT for sentiment analysis and sentence transformers for topic modeling, showcases how these models can be used together to provide a more holistic view of consumer feedback. This approach addresses limitations in previous research, such as the inability to capture the full context of sentiments or to identify multiple themes within a single review.

The integration of these models also offers a scalable solution for analyzing large datasets, making it a valuable tool for companies dealing with vast amounts of unstructured data. Future research can build on this framework by exploring its application in different languages or cultural contexts, further validating its utility across diverse markets.

From a social perspective, this study enriches the consumer decision-making process by providing insights derived from a comprehensive analysis of user-generated content. By moving beyond simple quantitative metrics such as star ratings, the research helps paint a more detailed picture of consumer satisfaction and expectations (Kumar & Parimala, 2020). This not only aids consumers in making more informed choices but also pressures manufacturers to uphold and enhance product quality.

The broader impact of this research extends to the overall consumer experience and market dynamics within the smartphone industry. By providing manufacturers with the tools to better understand and respond to consumer needs, this study contributes to the creation of products that are more closely aligned with customer expectations. This alignment not only enhances customer satisfaction but also fosters brand loyalty, which is essential in a highly competitive market.

Moreover, the insights gained from this research can help companies navigate the complex landscape of online reviews, where a single negative comment can significantly influence brand perception. By proactively addressing potential issues identified through sentiment analysis, companies can mitigate reputational risks and maintain a positive brand image.

In conclusion, this study not only advances the theoretical framework of sentiment analysis by integrating it with empirical consumer behavior analysis (Pang & Lee, 2008) but also demonstrates the practical applications of these methodologies in real-world settings. As the digital landscape continues to evolve, the insights provided by such advanced analytical techniques will become increasingly critical in guiding the strategic decisions of businesses across industries. The ongoing development and refinement of NLP applications stand to significantly influence both academic research and commercial strategies, highlighting the importance of continuous innovation and adaptation in technology and data analysis (Zhang et al., 2018).

LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH

While this study provides valuable insights, there are limitations that should be acknowledged. The datasets used, though comprehensive, may not fully capture the diversity of consumer opinions across different regions or demographic groups. Future research could expand the scope by including a wider range of datasets or by applying the methodology to other product categories.

While the model is adept at detecting positive customer sentiments regarding smartphones, it has limitations in accurately analyzing negative and neutral reviews. Future improvements should focus on enhancing the model’s ability to interpret a broader range of sentiments, ensuring a more comprehensive understanding of customer feedback. Additionally, while BERT and sentence transformers have proven effective in this context, there is room for further refinement in model training and fine-tuning, particularly in handling more complex linguistic features such as irony or regional dialects. Future studies could explore these areas to enhance the robustness of sentiment analysis models.

REFERENCES

  • An, Y., Kim, D., Lee, J., Oh, H., Lee, J. S., & Jeong, D. (2023). Topic modeling-based framework for extracting marketing information from e-commerce reviews. IEEE Access, 11, 135049-135060. https://doi.org/10.1109/ACCESS.2023.3337808
    » https://doi.org/10.1109/ACCESS.2023.3337808
  • Anoop, V. S., & Asharaf, S. (2019). Aspect-oriented sentiment analysis: A topic modeling-powered approach. Journal of Intelligent Systems, 29(1), 1166-1178. https://doi.org/10.1515/jisys-2018-0299
    » https://doi.org/10.1515/jisys-2018-0299
  • Anvar Shathik, J., & Krishna Prasad, K. (2020). A literature review on application of sentiment analysis using machine learning techniques. International Journal of Applied Engineering and Management Letters, 4(2), 41-77. http://doi.org/10.5281/zenodo.3977576
    » http://doi.org/10.5281/zenodo.3977576
  • Bagheri, A., Saraee, M., & Jong, F. (2014). ADM-LDA: An aspect detection model based on topic modelling using the structure of review sentences. Journal of Information Science, 40(5), 621-636. https://doi.org/10.1177/0165551514538744
    » https://doi.org/10.1177/0165551514538744
  • Bastani, K., Namavari, H., & Shaffer, J. (2019). Latent Dirichlet allocation (LDA) for topic modeling of the CFPB consumer complaints. Expert Systems with Applications, 127, 256-271. https://doi.org/10.1016/j.eswa.2019.03.001
    » https://doi.org/10.1016/j.eswa.2019.03.001
  • Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022. https://jmlr.csail.mit.edu/papers/v3/blei03a.html
    » https://jmlr.csail.mit.edu/papers/v3/blei03a.html
  • Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New avenues in opinion mining and sentiment analysis. IEEE Intelligent systems, 28(2), 15-21. https://doi.org/10.1109/MIS.2013.30
    » https://doi.org/10.1109/MIS.2013.30
  • Chamlertwat, W., Bhattarakosol, P., Rungkasiri, T., & Haruechaiyasak, C. (2012). Discovering consumer insight from Twitter via sentiment analysis. Journal of Universal Computer Science, 18(8), 973-992. https://doi.org/10.3217/jucs-018-08-0973
    » https://doi.org/10.3217/jucs-018-08-0973
  • Chawla, S., Dubey, G., & Rana, A. (2017, September). Product opinion mining using sentiment analysis on smartphone reviews. In 2017 6th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 377-383. IEEE.
  • Chen, K., Kou, G., Shang, J., & Chen, Y. (2015). Visualizing market structure through online product reviews: Integrate topic modeling, TOPSIS, and multi-dimensional scaling approaches. Electronic Commerce Research and Applications, 14(1), 58-74. https://doi.org/10.1016/j.elerap.2014.11.004
    » https://doi.org/10.1016/j.elerap.2014.11.004
  • Chen, P., Sun, Z., Bing, L., & Yang, W. (2017). Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 452-461.
  • Chen, Y.-S, Chen, T.-J., & Lin, C.-C. (2016). The analyses of purchasing decisions and brand loyalty for smartphone consumers. Open Journal of Social Sciences, 4(7), 108-116. http://doi.org/10.4236/jss.2016.47018
    » http://doi.org/10.4236/jss.2016.47018
  • Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., & Harshman, R. (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 391-407. https://doi.org/10.1002/(SICI)1097-4571(199009)41:6%3C391::AID-ASI1%3E3.0.CO;2-9
    » https://doi.org/10.1002/(SICI)1097-4571(199009)41:6%3C391::AID-ASI1%3E3.0.CO;2-9
  • Devi, N. L., Anilkumar, B., Sowjanya, A. M., & Kotagiri, S. (2024). An innovative word embedded and optimization-based hybrid artificial intelligence approach for aspect-based sentiment analysis of app and cellphone reviews. Multimedia Tools and Applications, 1-34. https://doi.org/10.1007/s11042-024-18510-7
    » https://doi.org/10.1007/s11042-024-18510-7
  • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186.
  • DiMaggio, P., Nag, M., & Blei, D. (2013). Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of U.S. government arts funding. Poetics, 41(6), 570-606. https://doi.org/10.1016/j.poetic.2013.08.004
    » https://doi.org/10.1016/j.poetic.2013.08.004
  • Feldman, R. (2013). Techniques and applications for sentiment analysis. Communications of the ACM, 56(4), 82-89. https://doi.org/10.1145/2436256.2436274
    » https://doi.org/10.1145/2436256.2436274
  • Gregoriades, A., Pantelous, A. A., & Louvieris, P. (2021). Real-time situational awareness for adaptive maritime surveillance. Expert Systems with Applications, 182, 115273. https://doi.org/10.1016/j.eswa.2021.115273
    » https://doi.org/10.1016/j.eswa.2021.115273
  • Gondhi, N. K., Sharma, E., Alharbi, A. H., Verma, R., & Shah, M. A. (2022). Efficient long short-term memory-based sentiment analysis of e-commerce reviews. Computational Intelligence and Neuroscience. https://doi.org/10.1155/2022/3464524
    » https://doi.org/10.1155/2022/3464524
  • Gregory, M., Chinchor, N., Whitney, P., Carter, R., Hetzler, E., & Turner, A. (2006). User-directed sentiment analysis: Visualizing the affective content of documents. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, 23-30.
  • Hennig-Thurau, T., Gwinner, K. P., Walsh, G., & Gremler, D. D. (2004). Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet? Journal of Interactive Marketing, 18(1), 38-52. https://doi.org/10.1002/dir.10073
    » https://doi.org/10.1002/dir.10073
  • Hofmann, T. (1999). Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 50-57.
  • Hu, M., & Liu, B. (2004). Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 168-177.
  • Huang, J., Sun, H., Guo, P., Zhao, M., & Niu, K. (2016). Fine-grained sentimental tendency analysis based on Chinese online commentary of mobile phone. In Proceedings of the 2016 IEEE International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), 319-323.
  • Jelodar, H., Wang, Y., Yuan, C., Feng, X., Jiang, X., Li, Y., & Zhao, L. (2019). Latent Dirichlet allocation (LDA) and topic modeling: Models, applications, a survey. Multimedia tools and applications, 78, 15169-15211. https://doi.org/10.1007/s11042-018-6894-4
    » https://doi.org/10.1007/s11042-018-6894-4
  • Jeong, B., Yoon, J., & Lee, J.-M. (2019). Social media mining for product planning: A product opportunity mining approach based on topic modeling and sentiment analysis. International Journal of Information Management, 48, 280-290. https://doi.org/10.1016/j.ijinfomgt.2017.09.009
    » https://doi.org/10.1016/j.ijinfomgt.2017.09.009
  • Joung, J., & Kim, H. M. (2021). Automated keyword filtering in latent Dirichlet allocation for identifying product attributes from online reviews. Journal of Mechanical Design, 143(8), 084501. https://doi.org/10.1115/1.4048960
    » https://doi.org/10.1115/1.4048960
  • Kaur, G., & Sharma, A. (2023). A deep learning-based model using hybrid feature extraction approach for consumer sentiment analysis. Journal of Big Data, 10(1), 5. https://doi.org/10.1186/s40537-022-00680-6
    » https://doi.org/10.1186/s40537-022-00680-6
  • Kotler, P., & Keller, K. L. (2016). Marketing management (15th ed.). Pearson Education.
  • Kumar, G., & Parimala, N. (2020). An integration of sentiment analysis and MCDM approach for smartphone recommendation. International Journal of Information Technology & Decision Making, 19(4), 1037-1063. https://doi.org/10.1142/S021962202050025X
    » https://doi.org/10.1142/S021962202050025X
  • Latif, S., Qadir, J., Farooq, S., & Imran, M. A. (2017). How 5G wireless (and concomitant technologies) will revolutionize healthcare?. Future Internet, 9(4), 93. https://doi.org/10.3390/fi9040093
    » https://doi.org/10.3390/fi9040093
  • Li, H., & Ma, L. (2020). Charting the path to purchase using topic models. Journal of Marketing Research, 57(6), 1019-1036. https://doi.org/10.1177/0022243720954376
    » https://doi.org/10.1177/0022243720954376
  • Li, L. (2011). Introduction: Advances in e-business engineering. Information Technology and Management, 12(2), 49-50. https://doi.org/10.1007/s10799-011-0100-y
    » https://doi.org/10.1007/s10799-011-0100-y
  • Li, P., Huang, L., & Ren, G. J. (2020). Topic detection and summarization of user reviews. arXiv preprint. https://doi.org/10.48550/arXiv.2006.00148
    » https://doi.org/10.48550/arXiv.2006.00148
  • Liu, Z. (2024). A financial analysis of Apple based on its external and internal environment. Journal of Education, Humanities and Social Sciences, 30, 97-104. https://doi.org/10.54097/tyj6y326
    » https://doi.org/10.54097/tyj6y326
  • Mahadevan, A., & Arock, M. (2020). Integrated topic modeling and sentiment analysis: A review rating prediction approach for recommender systems. Turkish Journal of Electrical Engineering and Computer Sciences, 28(1), 107-123. https://doi.org/10.3906/elk-1905-114
    » https://doi.org/10.3906/elk-1905-114
  • Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams engineering journal, 5(4), 1093-1113. https://doi.org/10.1016/j.asej.2014.04.011
    » https://doi.org/10.1016/j.asej.2014.04.011
  • Micu, A., Micu, A. E., Geru, M., & Lixandroiu, R. C. (2017). Analyzing user sentiment in social media: Implications for online marketing strategy. Psychology & Marketing, 34(12), 1094-1100. https://doi.org/10.1002/mar.21049
    » https://doi.org/10.1002/mar.21049
  • Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2011). SentiFul: A lexicon for sentiment analysis. IEEE Transactions on Affective Computing, 2(1), 22-36. https://doi.org/10.1109/T-AFFC.2011.1
    » https://doi.org/10.1109/T-AFFC.2011.1
  • Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval, 2(1-2), 1-135. https://doi.org/10.1561/1500000011
    » https://doi.org/10.1561/1500000011
  • Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. arXiv preprint. https://doi.org/10.48550/arXiv.cs/0205070
    » https://doi.org/10.48550/arXiv.cs/0205070
  • Pew Research Center. (2022). Internet, smartphone, and social media use in advanced economies: 2022. https://www.pewresearch.org/global/2022/12/06/internet-smartphone-and-social-media-use-in-advanced-economies-2022/
    » https://www.pewresearch.org/global/2022/12/06/internet-smartphone-and-social-media-use-in-advanced-economies-2022/
  • Paul, M. J., & Dredze, M. (2011). You are what you tweet: Analyzing Twitter for public health. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 265-272.
  • Prabowo, R., & Thelwall, M. (2009). Sentiment analysis: A combined approach. Journal of Informetrics, 3(2), 143-157. https://doi.org/10.1016/j.joi.2009.01.003
    » https://doi.org/10.1016/j.joi.2009.01.003
  • Ramshankar, N., & Prathap, J. (2023). Automated sentimental analysis using heuristic-based CNN-BiLSTM for e-commerce dataset. Data & Knowledge Engineering, 146, 102194. https://doi.org/10.1016/j.datak.2023.102194
    » https://doi.org/10.1016/j.datak.2023.102194
  • Ravi, K., & Ravi, V. (2017). Ranking of branded products using aspect-oriented sentiment analysis and ensembled multiple criteria decision-making. International Journal of Knowledge Management in Tourism and Hospitality, 1(3), 317-359. https://doi.org/10.1504/IJKMTH.2017.086816
    » https://doi.org/10.1504/IJKMTH.2017.086816
  • Reimers, N., & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
  • Salem, M. A., & Maghari, A. Y. (2020). Sentiment analysis of mobile phone products reviews using classification algorithms. In Proceedings of the 2020 International Conference on Promising Electronic Technologies (ICPET), 84-88. IEEE.
  • Sharma, N., Jain, T., Narayan, S. S., & Kandakar, A. C. (2022). Sentiment analysis of Amazon smartphone reviews using machine learning & deep learning. In Proceedings of the 2022 IEEE International Conference on Data Science and Information System (ICDSIS), 1-4. IEEE.
  • Sharma, D., Sabharwal, M., Goyal, V., & Vij, M. (2020). Sentiment analysis techniques for social media data: A review. In Proceedings of the First International Conference on Sustainable Technologies for Computational Intelligence: ICTSCI 2019, 75-90. Springer Singapore.
  • Shayaa, S., Jaafar, N. I., Bahri, S., Sulaiman, A., Wai, P. S., Chung, Y. W., Piprani, A. Z., & Al-Garadi, M. A. (2018). Sentiment analysis of big data: Methods, applications, and open challenges. IEEE Access, 6, 37807-37827.
  • Singla, Z., Randhawa, S., & Jain, S. (2017). Sentiment analysis of customer product reviews using machine learning. In 2017 International Conference on Intelligent Computing and Control (I2C2). IEEE. https://doi.org/10.1109/I2C2.2017.8321910
    » https://doi.org/10.1109/I2C2.2017.8321910
  • Sivakumar, M., & Uyyala, S. R. (2021). Aspect-based sentiment analysis of mobile phone reviews using LSTM and fuzzy logic. International Journal of Data Science and Analytics, 12(4), 355-367. https://doi.org/10.1007/s41060-021-00277-x
    » https://doi.org/10.1007/s41060-021-00277-x
  • Statista. (2023). Global smartphone sales to end users since 2007. https://www.statista.com/statistics/263437/global-smartphone-sales-to-end-users-since-2007/
    » https://www.statista.com/statistics/263437/global-smartphone-sales-to-end-users-since-2007/
  • Sun, C., Huang, L., & Qiu, X. (2019). Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 380-385.
  • Time Magazine. (2014). IBM’s Simon was the first smartphone 20 years too soon. https://time.com/3137005/first-smartphone-ibm-simon/.
    » https://time.com/3137005/first-smartphone-ibm-simon/.
  • Turney, P. D. (2002). Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. arXiv preprint cs/0212032. https://doi.org/10.48550/arXiv.cs/0212032
    » https://doi.org/10.48550/arXiv.cs/0212032
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 6000-6010). Curran Associates Inc. https://doi.org/10.48550/arXiv.1706.03762
    » https://doi.org/10.48550/arXiv.1706.03762
  • Vayansky, I., & Kumar, S. A. (2020). A review of topic modeling methods. Information Systems, 94, 101582. https://doi.org/10.1016/j.is.2020.101582
    » https://doi.org/10.1016/j.is.2020.101582
  • Wang, F., Yang, Y., Tso, G. K., & Li, Y. (2019). Analysis of launch strategy in cross-border e-commerce market via topic modeling of consumer reviews. Electronic Commerce Research, 19, 863-884. https://doi.org/10.1007/s10660-019-09368-1
    » https://doi.org/10.1007/s10660-019-09368-1
  • Wang, W., Wang, H., & Song, Y. (2017). Ranking product aspects through sentiment analysis of online reviews. Journal of Experimental & Theoretical Artificial Intelligence, 29(2), 227-246. https://doi.org/10.1080/0952813X.2015.1132270
    » https://doi.org/10.1080/0952813X.2015.1132270
  • World Economic Forum. (2018). Remembering the world’s first smartphone: Simon. https://www.weforum.org/agenda/2018/03/remembering-first-smartphone-simon-ibm/
    » https://www.weforum.org/agenda/2018/03/remembering-first-smartphone-simon-ibm/
  • Xu, G., Liu, B., Shu, L., & Yu, P. S. (2019). BERT post-training for review reading comprehension and aspect-based sentiment analysis. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2324-2335.
  • Xun, G., Li, Y., Zhao, W. X., Gao, J., & Zhang, A. (2017). A correlated topic model using word embeddings. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (Vol. 17, 4207-4213).
  • Yadav, A., & Vishwakarma, D. K. (2020). Sentiment analysis using deep learning architectures: a review. Artificial Intelligence Review, 53(6), 4335-4385. https://doi.org/10.1007/s10462-019-09794-5
    » https://doi.org/10.1007/s10462-019-09794-5
  • Yang, B., & Cardie, C. (2014). Context-aware learning for sentence-level sentiment analysis with posterior regularization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (325-335).
  • Yiran, Y., & Srivastava, S. (2019). Aspect-based sentiment analysis on mobile phone reviews with LDA. In Proceedings of the 2019 4th International Conference on Machine Learning Technologies (101-105).
  • Zhai, M., Wang, X., & Zhao, X. (2024). The importance of online customer reviews characteristics on remanufactured product sales: Evidence from the mobile phone market on Amazon.com. Journal of Retailing and Consumer Services, 77, 103677. https://doi.org/10.1016/j.jretconser.2023.103677
    » https://doi.org/10.1016/j.jretconser.2023.103677
  • Zhai, Z., Liu, B., Xu, H., & Jia, P. (2011). Constrained LDA for grouping product features in opinion mining. In Advances in Knowledge Discovery and Data Mining: 15th Pacific-Asia Conference, PAKDD 2011, Shenzhen, China, May 24-27, 2011, Proceedings, Part I (Vol. 15, pp. 448-459). Springer Berlin Heidelberg.
  • Zhan, J., Loh, H. T., & Liu, Y. (2009). Gather customer concerns from online product reviews: A text summarization approach. Expert Systems with Applications, 36(2), 2107-2115. https://doi.org/10.1016/j.eswa.2007.12.039
    » https://doi.org/10.1016/j.eswa.2007.12.039
  • Zhang, L., Wang, S., & Liu, B. (2018). Deep learning for sentiment analysis: A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4), e1253. https://doi.org/10.1002/widm.1253
    » https://doi.org/10.1002/widm.1253
  • Zhang, M., Sun, L., Li, Y., Wang, G. A., & He, Z. (2023). Using supplementary reviews to improve customer requirement identification and product design development. Journal of Management Science and Engineering, 8(4), 584-597. https://doi.org/10.1016/j.jmse.2023.03.001
    » https://doi.org/10.1016/j.jmse.2023.03.001
  • Zheng, L., Wang, H., & Gao, S. (2018). Sentimental feature selection for sentiment analysis of Chinese online reviews. International Journal of Machine Learning and Cybernetics, 9(1), 75-84. https://doi.org/10.1007/s13042-015-0347-4
    » https://doi.org/10.1007/s13042-015-0347-4
  • Funding
    The author reported that there was no funding for the research in this article.
  • Plagiarism Check
    RAC maintains the practice of submitting all documents approved for publication to the plagiarism check, using specific tools, e.g.: iThenticate.
  • Peer Review Method
    This content was evaluated using the double-blind peer review process. The disclosure of the reviewers’ information on the first page, as well as the Peer Review Report, is made only after concluding the evaluation process, and with the voluntary consent of the respective reviewers and authors.
  • Data Availability
    The author claims that all data used in the research have been made publicly available, and can be accessed via the Harvard Dataverse platform:
    Jabeen, Shaista, 2024, "Replication Data for: "Decoding consumer sentiments: advanced NLP techniques for analyzing smartphone reviews" published by RAC-Revista de Administração Contemporânea, Harvard Dataverse, V1. https://doi.org/10.7910/DVN/1LXMYJ
    RAC encourages data sharing but, in compliance with ethical principles, it does not demand the disclosure of any means of identifying research subjects, preserving the privacy of research subjects. The practice of open data is to enable the reproducibility of results, and to ensure the unrestricted transparency of the results of the published research, without requiring the identity of research subjects.
  • JEL Code:
    M30.
  • Peer Review Report:
    The Peer Review Report is available at this external URL.

Edited by

  • Editor-in-chief:
    Paula Chimenti (Universidade Federal do Rio de Janeiro, COPPEAD, Brazil) https://orcid.org/0000-0002-6492-4072
  • Associate Editor:
    André Luís Araujo da Fonseca (Universidade Federal do Rio de Janeiro, Brazil) https://orcid.org/0000-0002-1318-7156

Publication Dates

  • Publication in this collection
    01 Nov 2024
  • Date of issue
    2024

History

  • Received
    17 May 2024
  • Reviewed
    07 Sept 2024
  • Accepted
    10 Sept 2024
  • Published
    23 Sept 2024