Why discriminative AI will continue to dominate enterprise AI adoption in a world flooded with discussions on generative AI

Recently, McKinsey released an extensive research paper on the economic potential of generative AI. Despite its focus, the report carries a clear message that enterprise AI adopters should note: “Traditional advanced-analytics and machine learning algorithms are highly effective at performing numerical and optimization tasks such as predictive modeling, and they continue to find new applications in a wide range of industries.”
What the report calls “traditional advanced-analytics,” I prefer to call “discriminative AI.” And in my opinion, there are three main reasons discriminative AI will remain important in traditional enterprises.
DISCRIMINATIVE AI VERSUS GENERATIVE AI
In enterprise artificial intelligence (enterprise AI), there are two main types of models: discriminative and generative. Discriminative models classify or predict from existing data, while generative models create new data.
Today, discriminative AI models are more widely used in enterprises. This is because they are better suited for tasks that require accurate classification or prediction, such as fraud detection, customer segmentation, and risk assessment. Generative AI models are still at an earlier stage of development, and for these kinds of tasks they are not yet as accurate or reliable as their discriminative counterparts.
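To make the distinction concrete, here is a minimal sketch in Python. The “transaction” features, class balance, and numbers are invented for illustration: the discriminative classifier learns a boundary between fraudulent and legitimate transactions from labeled examples, while the generative step learns the shape of one class well enough to produce new, similar-looking samples.

```python
# Illustrative sketch only: toy data, not a production fraud model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "transaction" features: [amount, hour_of_day]; 1 = fraud, 0 = legitimate.
X_legit = rng.normal(loc=[50, 14], scale=[20, 4], size=(500, 2))
X_fraud = rng.normal(loc=[400, 3], scale=[150, 2], size=(50, 2))
X = np.vstack([X_legit, X_fraud])
y = np.array([0] * 500 + [1] * 50)

# Discriminative model: learns P(label | features) and outputs a decision.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([[420, 2]]))  # probability that this transaction is fraud

# Generative model (greatly simplified): learns the distribution of the data
# itself and can produce *new* points that resemble what it was trained on.
mu, sigma = X_fraud.mean(axis=0), X_fraud.std(axis=0)
synthetic_fraud = rng.normal(loc=mu, scale=sigma, size=(5, 2))
print(synthetic_fraud)  # newly generated "fraud-like" transactions
```

The point of the sketch is the difference in output: the first model answers “which class is this?” while the second produces data that did not exist before.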
WHY DISCRIMINATIVE AI WILL CONTINUE TO DOMINATE ENTERPRISE AI ADOPTION
Here are the three main reasons discriminative AI will likely continue to dominate enterprise AI adoption in the near future.
Accuracy And Reliability
Discriminative AI models are more accurate and reliable than generative AI models for tasks that require precise classification or prediction. This is because they are trained on labeled data, meaning they have been explicitly taught to distinguish between different classes of data.
Generative AI models, on the other hand, are typically trained on unlabeled data, so they must discover the structure of the data on their own. That is a harder learning problem, and it tends to produce models that are less accurate and reliable for classification and prediction.
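The difference in training signal can be shown with a small sketch, assuming a next-token-prediction objective of the kind commonly used to train generative language models; the examples themselves are made up.

```python
# Supervised (discriminative): every example comes with a human-provided label.
labeled_examples = [
    ({"amount": 420.0, "hour": 2}, "fraud"),
    ({"amount": 35.0, "hour": 15}, "legitimate"),
]

# Self-supervised (typical for generative language models): no labels are given;
# the training target is manufactured from the raw data itself, e.g. predicting
# the next token from the tokens that came before it.
text = "the customer disputed the charge"
tokens = text.split()
next_token_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
print(next_token_pairs)
# [(['the'], 'customer'), (['the', 'customer'], 'disputed'), ...]
```

In the first case the model is told exactly what to distinguish; in the second it must infer the relevant structure from the raw data alone.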