The usage of social media platforms has grown rapidly over the past decade owing to their ease of access, continual evolution, and broad accessibility. This increased usage has proved advantageous for staying connected, sharing posts and ideas, and exchanging thoughts, but it also has its fair share of drawbacks. These drawbacks arise mainly from a lack of media literacy among consumers, their difficulty in understanding non-native languages, and deliberate attempts to provoke responses from others within their community. As a result, fake news, which has no factual validity, accumulates over time and begins to appear in the feed of every social media consumer, causing ambiguity and uncertainty. To sustain the integrity of social media platforms, such content must be distinguished from genuine news. Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have accelerated the development of autonomous systems capable of performing such tasks in minimal time. This research proposes a novel approach for detecting fake news. A fake news dataset acquired from online sources is first preprocessed; textual features are then extracted using N-gram methods, namely Term Frequency-Inverse Document Frequency (TF-IDF) and Bag of Words (BoW). Latent Dirichlet Allocation (LDA)-based topic modeling is also applied to the compiled data to derive dominant topics, which are subsequently scaled. Finally, the textual features and topic vectors are assessed with standalone ML classifiers, Support Vector Machine (SVM), Logistic Regression (LR), and Naïve Bayes (NB), and ensemble ML classifiers, Random Forest (RF) and Gradient Boosting (GB), and the results are evaluated using several performance metrics.
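The pipeline described above (preprocessing, TF-IDF/BoW feature extraction, scaled LDA topic vectors, and classification) can be sketched roughly as follows. This is a minimal illustration using scikit-learn with a hypothetical toy corpus; the paper's actual dataset, preprocessing steps, and hyperparameters are not specified here, and Logistic Regression stands in for any of the evaluated classifiers.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for the preprocessed fake-news dataset (hypothetical).
texts = [
    "breaking miracle cure discovered doctors hate it",
    "celebrity secretly replaced by clone sources say",
    "government confirms new infrastructure funding bill",
    "local council approves annual budget for schools",
]
labels = np.array([1, 1, 0, 0])  # 1 = fake, 0 = real

# N-gram textual features: TF-IDF over unigrams and bigrams.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X_tfidf = tfidf.fit_transform(texts)

# LDA topic modeling on Bag-of-Words counts; topic vectors are then scaled.
bow = CountVectorizer()
X_bow = bow.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = MinMaxScaler().fit_transform(lda.fit_transform(X_bow))

# Concatenate textual features with scaled topic vectors and classify.
X = np.hstack([X_tfidf.toarray(), topics])
clf = LogisticRegression().fit(X, labels)
train_acc = clf.score(X, labels)
```

In practice the dataset would be split into training and test sets, and each of the five classifiers (SVM, LR, NB, RF, GB) would be fitted and compared on held-out performance metrics rather than training accuracy.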