Incorporating AI Into Your Product Strategy

What makes AI a game-changing technology?

The pace and sophistication of AI-driven insights can exceed human capability – AI can analyze massive datasets quickly and uncover patterns, trends, and correlations that humans could not. Unlocking this level of analysis enables businesses to make informed decisions, create content to personalize customer experiences, and optimize operations.

The AI boom is analogous to the game-changing impact of smartphones – after the iPhone and the first Android phones came out, most mobile apps flopped. The products that endured the test of time created life-changing experiences by leveraging what smartphone technology made possible. Consider examples like:

  • Google Maps – used the phone’s GPS capabilities to enable millions to get navigation and real-time traffic data without having to buy a new car
  • Facebook – used the phone’s camera to make it easy to share pictures (and locations) to keep users’ feeds engaging
  • Angry Birds – used the touchscreen to emulate a slingshot in a fun game

Strategic Considerations

When should a company start building AI features?

Start as soon as possible to mitigate your risk of being disrupted – most companies should begin developing their first AI features now, especially if they are concerned about being displaced by an AI-first startup. 

Remember that AI is fueled by data – you don’t want your AI product to go down in flames after the novelty of AI wears off. If you don’t have proprietary data that can be used to train AI models to personalize your customer experience or generate content based on top-notch examples, your AI strategy will fail and your product or feature will be forgotten or copied by a competitor.

Delayed adoption can create a data hurdle – companies that wait too long to respond to a competitive threat may not get access to the data needed to build a better model and therefore a compelling product. This is especially important when switching costs are high, such as with an enterprise B2B product that’s deeply integrated into a company’s workflows. Consider the machine learning data flywheel: more usage generates more data, which trains a better model, which delivers a better product, which in turn attracts more usage.

You can still be successful if you are late to market but have a competitive advantage that lets you leapfrog others – some companies are more conservative when it comes to adopting new technologies or responding to competitive threats, and rightfully so. Consider how Microsoft Teams leapfrogged Slack in terms of active users, despite launching their product after Slack. In this case, Microsoft used their installed Office 365 customer base to drive adoption. 

What is an approachable way to build your first AI feature? 

Get buy-in for financial investment before you begin building – you’ll likely need a team of 3-4 people (product, design, engineering, and data science) working on your first AI feature for a full year before you have a market-ready product. In addition to your initial team’s salaries, consider the cost of additional FTEs you might need to add and the risk of the project running over, especially if your team will be learning about AI when they begin. 

Test an internal use case first – consider dipping your toes in the water by exploring an internal-facing use case if your company is not yet ready to build a customer-facing AI feature. To do so, ask each of your functional team leads to identify existing tasks that their team considers time-consuming, tedious, boring, or hard. Then, consider whether you can automate those tasks by either:

  • Buying an existing AI product (like Gong, which helps sales leaders coach their team to move deals forward faster)
  • Building an internal product (like we did with our client HUNGRY, where we built an internal tool for CSMs to generate proposals faster and within margin guidelines)

How should you incorporate AI into your product strategy? 

AI is a tool to help you realize your product strategy – don’t develop an AI strategy that’s separate from your product or company strategy because your overall product strategy should always determine how you leverage AI. To keep your product strategy top-of-mind, use a framework (such as our Vision-Led Product Management framework) that defines strategy as your multi-year plan to realize a customer journey vision. This vision should clearly lay out what you want your customer experience to look like in the future.

Consider the current and near-term possibilities that AI can help unlock – due to AI’s rapid progress, its capabilities are constantly changing. Regularly ask yourself the following questions, then update your customer journey vision and work backwards to define the strategic milestones (e.g., new data or features) that will let you bring that vision to life: 

  • What outcomes can you deliver to customers with AI in a way that was previously impossible, time-consuming, or both?
  • What parts of your customers’ workflows can your product expand into with AI?
  • Are you able to use AI and data to solve new problems for customers that would otherwise take them a long time to solve manually? Which problems could you help them solve?

Planning Your First AI Feature: The AI Product Strategy Pyramid

What do you need when building AI features? What is the AI Product Strategy Pyramid?

The AI Product Strategy Pyramid contains the prerequisites for a successful development process and release – each component of its Foundation, Internal Mechanics, and Customer Experience layers will help your team efficiently launch effective AI features.

The Foundation of your AI product strategy comprises 3 key elements:

  • Key Outcome / Use Case(s) – start with the customer problem, which we call the key outcome. Identify the metric the customer is trying to improve. For example, key outcomes for B2B products usually revolve around ROI; you might set out to develop a product or feature that will help customers increase revenue or cut costs relative to what they spend on your product.
  • Data – you need clean training data to teach the model what a great output looks like. This often requires significant work from data engineers to clean and tag data and streamline data pipelines.
  • Model – companies can take the buy, build, or partner paths when choosing an AI model. Buying a third-party model makes the most sense for most companies that are just starting out with AI. As your AI expertise grows, you can switch to building (or at least hosting) your own model. Few companies would benefit from a partnership model; Apple and OpenAI’s partnership to deliver Apple Intelligence is a well-known but unique case.

3 pillars of Internal Mechanics determine how your AI feature will actually be built and maintained:

  • Team – many companies don’t have product managers, designers, engineers, and data scientists with AI experience because these skills are rare and command high salaries. If you’re not ready to hire a seasoned team, consider sending a “tiger team” (a small cross-functional squad that will build your first AI feature) to AI training or hiring an AI advisor/ consultant to provide tailored hands-on training.
  • Governance – there are myriad regulatory, compliance, ethical, and monitoring-based considerations when launching an AI feature – and errors in any of these areas can have severe consequences (e.g., the lawsuit Character.ai is facing for suggesting harmful actions and inappropriate ideas to minors). Establish roles and responsibilities around designing, building, approving, and monitoring AI features internally, and assign clear accountability for these features.
  • Operations – because AI models can generate a wide variety of responses, you must consider how to support AI features. You’ll also need to consider how AI features are tested during development and how to analyze AI feature usage in production.

The Customer Experience section of the pyramid determines how you will expose your AI model to customers (or users):

  • User Experience / Interface (UX / UI) – chatbots are a well-known design pattern thanks to ChatGPT, but you should consider other patterns such as using traditional input forms but prepopulating data using AI, or removing steps from your existing workflow by automating them with AI. Consider how explainability can help build trust in your AI. 
  • Feedback Loops – no AI model is perfect. Feedback loops let your users provide input on whether the model’s response was clear, appropriate or accurate. Without this, the data flywheel we saw above won’t start spinning.
  • Trust – AI might reignite data privacy and security concerns amongst your customers and users. Take a proactive approach to building trust with the user groups that come into contact with your AI feature. When appropriate, explain how you’re using and protecting their data.

Note: For a good example of how to use explainability to build trust in your AI, see how Perplexity cites sources to “pull back the curtain” and explain its responses. 

How should you identify your first AI use case?

Use traditional product management techniques to understand customer problems – customer discovery interviews, market and competitive research, and surveys can help you identify which problems are most painful, and where existing solutions fall short in helping customers achieve the outcome metrics they seek.

Look for characteristics associated with good generative AI use cases, such as:

  • Frequency – customer experiences the problem / situation often and has no workarounds
  • Data – problem requires lots of data that you already have in your product (or could collect)
  • Content – solution requires creating new content such as data, text, audio, images or videos
  • Repetitive – users have to repeat the same action over and over, often based on the same logic

Example: personalized nutrition advice

Identify the problem – consider the consumer key outcome of losing weight. Nutrition is one factor consumers consider when trying to lose weight. However, nutritionists are hard to find, expensive, and typically ask clients to tediously chronicle what they eat.

Consider the benefit AI could provide – an AI nutritionist could offer more personalized advice at a lower cost and greater scale than a human nutritionist.

Determine whether the use case is compatible with what AI is best at – this is a good use case for AI because it requires large amounts of data (e.g., diet, weight, sleep, and exercise), and AI can quickly generate a personalized nutrition plan for each user.

How should you allocate resources toward your first AI feature?

Consider using a “tiger team” model – since AI is a new technology that your team might not be familiar with, it can be more efficient to allocate a finite number of your top performers to work on the first feature for a fixed amount of time (often 1 year for tiger teams with little to no AI experience). This approach allows you to right-size your investment based on your confidence in building an AI feature and the expected return from implementing the solution.

Example: right-sizing “tiger team” investment for an inexperienced team

Assess current skills – suppose nobody on your team has experience building AI features.

Build a lean, viable team – allocate 3-4 people to your “tiger team”: a product manager who can design and market the new feature, a full-stack engineer (or a front-end and back-end engineer), and a data scientist. 

Estimate the cost of your time investment – suppose their annual salaries total $800k and they need 6 months to learn how to build and launch a feature based on a use case that was identified during your annual planning process. Your upfront investment is $400k.

Evaluate ROI and determine whether your plan makes sense – compare the expected incremental revenue or cost savings from what the tiger team is planning to ship to see if it’s worth more than their time (in this example, $400-800k). Conduct strategic customer discovery interviews to gauge customer willingness to pay to determine whether you should be able to generate a positive ROI. If you don’t think you can “invest proportionate to confidence” based on your current plan, adapt. You can choose a different time box if you are more or less confident (e.g., 3 months or 1 year instead of 6 months).
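The "invest proportionate to confidence" arithmetic above is simple enough to sketch in a few lines. The numbers come from the example in the text ($800k in combined salaries, a 6-month time box); the function names are hypothetical, for illustration only:

```python
def tiger_team_investment(total_annual_salaries: float, months: int) -> float:
    """Upfront cost of a tiger team working for a fixed time box."""
    return total_annual_salaries * months / 12

def plan_is_worth_it(expected_return: float, investment: float) -> bool:
    """Does expected incremental revenue or cost savings exceed the team's cost?"""
    return expected_return > investment

# Example from the text: $800k in combined salaries, 6-month time box.
investment = tiger_team_investment(800_000, 6)
print(investment)                             # 400000.0
print(plan_is_worth_it(600_000, investment))  # True: worth pursuing
```

Shortening the time box to 3 months halves the upfront investment (to $200k), which is the lever to pull when your confidence is lower.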

How do you decide between a build or buy approach for your model?

The build vs. buy decision depends on whether you plan to pay for access to an AI model via API or host open-source models yourself – a “buy” example is to call OpenAI’s APIs with a prompt and wrap the response inside of your product. A “build/host” example is to run Meta’s Llama for generative text or Stable Diffusion for image generation within your own environment. Both options have pros and cons:

Option 1: Buying

  • Advantages:
    • Fast time to market – you don’t need a lot of AI expertise on your team or to build out expensive AI infrastructure
    • Latest and greatest – your product and customers will benefit as vendors improve the model(s)
  • Disadvantages:
    • Data sharing – you’ll need to share a lot of your data when calling the API, which increases the risk of a data breach
    • Vendor reliance – you won’t have much leverage over your vendor, especially in terms of paying for the expensive AI infrastructure they’re hosting

Option 2: Building / Hosting Yourself

  • Advantages:
    • Lower data risk – you keep your data within your own environment
    • Faster responses – since you won’t have to make an API call over the network, you’ll likely get faster responses
  • Disadvantages:
    • Staffing – you’ll need experienced AI engineers, DevOps, and data science folks to build and maintain your AI infrastructure and models
    • Cost – you have to pay for data, people, and hardware to host, maintain, and improve your models
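Whichever option you choose, it helps to wrap the model behind your own interface so you can switch vendors (or move from buying to self-hosting) without rewriting product code. A minimal sketch of that abstraction; the class name and support-reply use case are hypothetical, and the stub stands in for a real vendor SDK call:

```python
class SupportDraftService:
    """Wraps any text-completion backend behind a product-level interface."""

    def __init__(self, complete):
        # `complete` is any callable that sends a prompt to a model
        # (vendor API today, a self-hosted model later) and returns text.
        self.complete = complete

    def draft_reply(self, customer_message: str) -> str:
        prompt = f"Draft a friendly support reply to: {customer_message}"
        return self.complete(prompt)

# Before committing to a vendor (or in tests), inject a stub backend:
service = SupportDraftService(lambda prompt: "Thanks for reaching out! ...")
print(service.draft_reply("My export is failing"))
```

Swapping the injected callable is the only change needed to move between the “buy” and “build/host” paths, which keeps the decision reversible.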

What are the different levels of customization that you can apply to the hosting approach you select?

There are 3 levels of customization for both bought and internally built models:

  • Level 1: Prompt Only
    • You call the model with only a prompt for what you’d like it to do, and you rely on its pre-trained data to produce a good response for your use case. None of your proprietary data is passed in, which means the risk of a data breach is low.
    • Level of Effort: low (1-2 months, or more for a novice AI team)
  • Level 2: Retrieval Augmented Generation (RAG)
    • You retrieve your proprietary data to augment the model’s response. This data is passed in as additional context along with the prompt. For example, if you want the model to auto-generate a customer support response, you could pass in several examples of your highest-rated responses for it to use as “great responses”. Note that the work to identify and pass in examples of “great responses” isn’t always easy.
    • Level of Effort: medium (2-3 months beyond passing in the prompt)
  • Level 3: Fine Tuned
    • You provide a lot of proprietary data to the model to fine-tune responses, which allows you to produce a more personalized response. For example, building on the customer support use case, you could fine-tune a model on hundreds of 5-star responses so that its output reflects your brand voice and policies.
    • Level of Effort: high (5-8 months, plus significant infrastructure expenses to train and deploy the model)
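At its core, the Level 2 (RAG) approach boils down to retrieving a few “great response” examples and passing them in as context alongside the prompt. A minimal sketch of that prompt assembly, using the customer support example above; the function is illustrative, and a real system would typically fetch examples via a retriever such as a vector search over your highest-rated historical responses:

```python
def build_rag_prompt(user_question: str, examples: list[str]) -> str:
    """Augment a prompt with retrieved examples of great support responses."""
    context = "\n\n".join(f"Example great response:\n{e}" for e in examples)
    return (
        "You are a customer support agent. Match the tone and quality of the "
        "examples below.\n\n"
        f"{context}\n\n"
        f"Customer question: {user_question}\nResponse:"
    )

# `examples` would normally come from a retrieval step over proprietary data.
prompt = build_rag_prompt(
    "How do I reset my password?",
    ["Hi! You can reset your password from Settings > Security..."],
)
print(prompt)
```

Note that none of this trains or modifies the model itself – the proprietary data only travels along with each request, which is what keeps the effort at “medium” compared to fine-tuning.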

Building and Launching Your First AI Feature

How should you approach building your first AI feature?

Many aspects of building AI features are similar to building non-AI features – building an AI feature requires identifying the target audience and use case, providing product specs and designs to clarify scope, and building the feature itself. 

AI features require significant data cleansing and organization – to feed data into an AI model and personalize the response for a given user, you’ll likely need to cleanse your data and organize it. “Garbage in, garbage out” applies to AI projects; it’s worth spending time to ensure that inputs to the model are accurate and can be used effectively to produce a better output.

Model training and testing are an important part of the development process – once you have good data, you must train your model with a subset of that data, then test it with the remainder to see if it’s producing the expected outputs. Testing and training should be an ongoing process that continues through all iterations of the product.
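The train/test partition described above can be sketched as a simple shuffled split; an 80/20 ratio is a common but not universal choice, and the helper below is an illustrative stand-in for what a data science library would provide:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle labeled data, then partition it into training and test sets."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cutoff], shuffled[cutoff:]

data = list(range(100))             # stand-in for labeled examples
train, test = train_test_split(data)
print(len(train), len(test))        # 80 20
```

Holding the test set out of training is what lets you check whether the model generalizes rather than just memorizing its training data.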

Employ regular sprint demos and status updates – keep internal stakeholders up-to-date on what the tiger team has done and learned. This helps stakeholders understand new impacts of the project and allows the team to course correct as needed. For example, your tiger team might have prototyped a model and realized they need a new data point from users, which might take extra time to build and integrate into the model. 

Note: tech teams need to deploy updated models to production as they would with new code. If you’re curious about the actual models and products used for each phase of the AI feature development process, this LLM training landscape graphic is a good starting point.

How should you launch an AI feature? 

Limit risk by using an alpha/ beta/ general availability release model – launch the feature to a limited subset of users to gather qualitative feedback and limit the risk of your model delivering incorrect or confusing responses before you roll it out to a larger user base. Make it accessible to 1-2 B2B customers or 100-200 D2C customers for the alpha release. While this might make your feature announcement less splashy, it will protect your reputation if something goes wrong. 
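A common way to implement this staged rollout is a deterministic feature flag: hash each user ID into a bucket so the same user always gets the same answer, and widen the percentage as you move from alpha to beta to general availability. A minimal sketch (the feature name and rollout mechanics are illustrative, not a specific product’s implementation):

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user; same inputs always give the same answer."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return bucket < rollout_pct

# Alpha: expose the AI feature to roughly 1% of users.
print(in_rollout("user-123", "ai-nutritionist", 0.01))
```

Because the bucketing is deterministic, raising `rollout_pct` from 0.01 to 0.10 keeps every alpha user in the feature while admitting new ones, so nobody sees the feature flicker on and off between sessions.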

Define success metrics before the launch – have a clear understanding of how you want your AI feature to help your customers and your company. After launch, measure those KPIs and report back to the team and stakeholders so you can evaluate performance and decide how to move forward. Examples of possible goals include:

  • Trial or adoption (how many users try the AI feature)
  • Repeat usage (how many users come back)
  • Customer satisfaction (e.g., use a feedback survey–with a clear target for what “good” means, such as 4.5/5 stars based on at least 50 reviews–to determine whether you’re adding value for your customers)
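The satisfaction goal above (“4.5/5 stars based on at least 50 reviews”) is easy to encode as a launch-review check; the thresholds below are taken from that example and the function name is illustrative:

```python
def meets_satisfaction_target(ratings, target=4.5, min_reviews=50):
    """Check whether the average rating clears the bar with enough data."""
    if len(ratings) < min_reviews:
        return False  # not enough signal yet to call the launch a success
    return sum(ratings) / len(ratings) >= target

print(meets_satisfaction_target([5, 5, 4] * 20))  # 60 reviews, avg ~4.67 -> True
print(meets_satisfaction_target([5] * 10))        # only 10 reviews -> False
```

Requiring a minimum review count before declaring success prevents a handful of enthusiastic early adopters from masking a feature that most users ignore.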

After the Launch of Your First AI Feature

What should you do if you see great results?

A successful launch usually triggers additional investment – if the feature is still in alpha/ beta, consider releasing it to more customers and/or iterating on user feedback to improve model accuracy and customer value.

Keep ROI calculus in mind when increasing investment – if you want to expand your AI feature/ program, you should see a clear reason to continue funding the AI squad. Leadership must also acknowledge that some of the squad’s future bandwidth will go toward maintaining the new AI feature and infrastructure, in addition to iterating on it or innovating with new AI features. 

Consider expanding the team – if customers are responding positively, it might make sense to add more AI team members, such as a designer or marketer. If feedback, traction, and impact are significant, you should explore adding another cross-functional AI squad to build features in parallel to the first squad. 

What should you do if you see lackluster results?

Understand the reason behind the poor performance – if the problem is a “quick” or simple fix (e.g., a marketing issue), it’s probably worth continuing your investment, iterating, and testing again. If a fundamental issue (e.g., a problem no one cares about, inability to get the right data, etc.) is preventing the feature from performing, you need to reassess whether the use case makes sense for your company at this time.

Primary reasons an initial AI feature fails its first release:

  • Marketing – Are users aware of this new AI feature? Did they get an email or call from CS? Was it clear how to try the feature when they logged into the product?
  • Use case – Did we choose the right use case? Did most users try the feature (suggesting there was demand for it)? Or did we miss the mark on the urgency and importance of this use case/key outcome and build something no one cares about?
  • Solution – Did we choose the right problem to solve but not nail the actual solution? Was something missing that made our feature less interesting than the current way users solve this problem? Was the UX confusing or buggy? Was low-quality input data causing the model to produce low-quality responses?

Overall

What are the most important things to get right?

Start with the customer problem or key outcome, then make sure AI is the appropriate solution – as with any feature, the ultimate goal is to ensure that you’re delivering value to customers and the business. 

Document your data strategy – create a solid data foundation upon which to build your AI features. From the beginning, you should be asking how your model will improve over time and how you can ensure that your data creates a moat that prevents competitors from copying your feature. 

What are common pitfalls?

Checking the AI box – don’t just build a chatbot wrapper for ChatGPT and expect success. Customers and investors will see through this and be disappointed. The power of AI is using your data to build more personalized experiences, and any feature that leverages AI should deliver on that promise. 
Assuming AI isn’t relevant to your business – don’t stick to a strategy that was created before AI matured. Just like you revisit your strategy when there’s a major market shakeup or regulatory change, you should revisit your product strategy in light of major technical advancements like AI.
