(2024-04-24) Cagan Nika AI Product Management
Martin Cagan on AI Product Management. Recently I have co-authored a few articles allowing me to highlight different product coaches, and in this article, I’d like to highlight Marily Nika. Marily specializes in helping product teams create AI-powered products and services. She has a PhD in machine learning... she runs a popular course teaching product managers what they need to know to build effective AI-powered products.
To be clear on nomenclature, when we refer to “AI Product Management” we are referring to the creation of AI-powered products.
To clarify, we are focusing here on AI-powered applications, and not on the underlying AI infrastructure.
The distinction is similar to the difference between a platform product and an experience product.
Most products have significant risks, and product teams are cross-functional so that they have the range of skills needed to address those risks. Few products highlight the critical need for strong product management more than AI-powered products.
The term “AI” includes both traditional AI, such as machine learning, and generative AI.
Examples of AI applications include smart home devices that employ speech and natural language understanding to process human voices, fraud detection systems, and, in the case of generative AI, advanced functions like content creation, summarization, and synthesis.
AI-powered products are especially challenging when it comes to the product risks.
Note that while AI product managers may not have ML scientists as dedicated members of their core product team, especially in the context of AI application products, they will frequently want to consult with ML scientists.
Feasibility Risk
Generative AI, by its nature, is probabilistic, not deterministic.
Certain types of products and capabilities are very well suited to probabilistic solutions, and others are not. This is perhaps the most fundamental consideration.
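To make the probabilistic/deterministic distinction concrete, here is a minimal, purely illustrative Python sketch. The candidate answers and probabilities are invented; the point is only that a rule-based system returns the same output for the same input every time, while a generative model samples from a distribution and can return different outputs for an identical prompt.

```python
import random

# Toy output distribution a generative model might produce for one fixed
# prompt -- the answers and probabilities are invented for illustration.
candidate_answers = ["renew subscription", "cancel subscription", "contact support"]
probabilities = [0.70, 0.20, 0.10]

def deterministic_rule() -> str:
    # A rule-based system maps the same input to the same output every time.
    return "renew subscription"

def probabilistic_model(rng: random.Random) -> str:
    # A generative model samples from a distribution, so repeated calls with
    # the identical input can yield different outputs.
    return rng.choices(candidate_answers, weights=probabilities, k=1)[0]

rng = random.Random()  # unseeded, so runs genuinely vary
print([deterministic_rule() for _ in range(5)])       # always the same answer
print([probabilistic_model(rng) for _ in range(5)])   # answers vary run to run
```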
If the product is a personalized news feed, then if, on occasion, a recommendation is not perfectly aligned with the user’s stated preferences, this can likely be managed in the user experience. However, if the product is controlling a dose of medication, such as insulin, then a dosage outside of medical guidelines would be unacceptable.
This leads directly to the critical topic of quality assurance. What are acceptable error rates? What are the possible types of mistakes? How will the product handle each type of mistake? Are there ways to mitigate mistakes with the user experience?
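One common way to mitigate mistakes in the user experience is to gate what the system is allowed to do automatically. The sketch below is a hypothetical illustration of that idea for the insulin-dosing example above: the dose range, confidence floor, and `Prediction` type are all assumptions made up for this sketch, not clinical values or any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float       # e.g. a suggested dose; purely illustrative
    confidence: float  # model's self-reported confidence in [0, 1]

# Thresholds and ranges below are invented for illustration; real limits would
# come from medical guidelines and the team's own error analysis.
CONFIDENCE_FLOOR = 0.90
SAFE_DOSE_RANGE = (0.0, 10.0)

def decide(pred: Prediction) -> str:
    # Hard guardrail: never surface a value outside the approved range.
    low, high = SAFE_DOSE_RANGE
    if not (low <= pred.value <= high):
        return "blocked: out-of-range suggestion, escalate to a human"
    # Soft guardrail: low-confidence suggestions go to human review instead of
    # being applied automatically -- a UX mitigation for probabilistic errors.
    if pred.confidence < CONFIDENCE_FLOOR:
        return f"review: suggest {pred.value} for human confirmation"
    return f"auto: apply {pred.value}"

print(decide(Prediction(value=4.5, confidence=0.97)))   # auto
print(decide(Prediction(value=4.5, confidence=0.60)))   # review
print(decide(Prediction(value=42.0, confidence=0.99)))  # blocked
```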
The quality of the data used to train the AI model is critical. Product managers need to have a clear and deep understanding of the training data.
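What "understanding the training data" can mean in practice is illustrated by a few basic sanity checks: label balance, missing values, and duplicates. The tiny fraud-detection sample below is entirely made up for illustration.

```python
from collections import Counter

# Made-up training rows for a hypothetical fraud-detection model.
rows = [
    {"amount": 12.0, "country": "US", "label": "legit"},
    {"amount": 950.0, "country": "US", "label": "fraud"},
    {"amount": 12.0, "country": "US", "label": "legit"},   # exact duplicate
    {"amount": None, "country": "DE", "label": "legit"},   # missing value
]

# 1. How imbalanced are the labels? Heavy imbalance changes how the model
#    should be trained and evaluated.
print("label counts:", Counter(r["label"] for r in rows))

# 2. How much data is missing, per field?
for field in ("amount", "country", "label"):
    missing = sum(1 for r in rows if r[field] is None)
    print(f"missing {field}: {missing}/{len(rows)}")

# 3. Are there duplicate rows that could leak between training and evaluation?
unique = {tuple(sorted(r.items())) for r in rows}
print("duplicate rows:", len(rows) - len(unique))
```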
It is also important to consider technical debt and infrastructure, and to address questions such as: Does the company have the necessary technical infrastructure to support the AI product?
High technical debt can hinder scalability, as well as overall feasibility and viability.
Usability Risk
For AI products, we need to design user experiences that clearly set expectations about what the technology can and can’t do, and at least conceptually, how the product works. This transparency is key to building trust and avoiding frustration when encountering limitations.
Finding the right balance between accuracy, speed, operational cost, and user experience is essential.
Value Risk
We can also see many examples today of AI products that are AI in name only. So the AI product manager’s first responsibility is ensuring that the AI-powered features and products deliver genuine, incremental value to users and customers.
Our job is to ensure the perceived value is clear and compelling, combining quantitative evidence (e.g. A/B testing) with qualitative insights (e.g. user testing).
We also need to collaborate closely with product marketing to ensure we can communicate this value effectively.
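As one concrete form that quantitative evidence can take, the sketch below runs a standard two-sided, two-proportion z-test comparing a control against an AI-powered variant. The function name and the conversion counts are invented for illustration; this is a generic statistical check, not a prescription from the article.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates between a control (A) and an AI-powered
    variant (B) with a two-sided two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Made-up counts for illustration only.
p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"control {p_a:.2%} vs variant {p_b:.2%}, z={z:.2f}, p={p:.3f}")
```

Even a statistically significant lift still needs the qualitative side: user testing to confirm that people actually perceive and want the AI-powered capability.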
Viability Risk
Today, the costs of building and running AI-powered products can be quite high.
Further, for several types of products, there are genuine questions about data provenance and copyright for the training data, biases in that data, and the ramifications of recommendations based on this data.
Realize that with probabilistic solutions, it is very possible for an AI-powered system to both save lives (by performing a critical task more accurately than humans), yet also put lives in danger (by making a mistake). Companies today must deal proactively with these ethical considerations.
Similarly, the AI product manager must strive to anticipate the consequences of bad actors using the products in illegal or inappropriate ways.
The AI product manager is expected to consider and analyze these risks, and work with the company’s legal team to protect customers as well as the company.
As with mobile PM, over time our expectation is that all PMs will need to have at least a foundation level of these skills. Most product managers will be expected to be AI product managers in the future, in the sense that they will be expected to understand how the enabling AI technology works, the range of risks involved, and the work required to mitigate those risks.