Running AI Experiments Across Go-to-Market Functions

Why is it important to have a strategy for running AI experiments? And why is experimentation an effective way to begin using AI?

Generative AI is evolving so quickly that companies need a deliberate approach to keep pace – and experimentation is that approach. The potential benefits are tremendous, but investing heavily in a single, untested approach risks obsolescence as the industry shifts. Scaling AI initiatives should be based on proven patterns of success; experimentation is the precursor to large-scale implementation.

Experiments mitigate risk and foster agility – rather than committing extensive resources to unproven strategies, starting with experiments provides an opportunity to learn and adapt before significant investments are made. By testing ideas and strategies incrementally, teams can pivot based on real-time feedback, ensuring alignment with evolving market needs.

Assessing Use Cases

What criteria should you use to evaluate the feasibility and potential impact of a generative AI use case?

Identify low-hanging fruit – productivity-focused use cases such as content creation and data synthesis are ripe opportunities for Generative AI experimentation. Focus on tasks where Generative AI can outperform humans to maximize impact and mitigate risk. For more complex uses, there are two main criteria for evaluating feasibility: internal vs. external users, and data sources and availability.

Consideration 1: Internal vs. External Users

Internal Users
• Example use case: Employee onboarding chatbot – incorporating your employee training and onboarding content library into a chatbot that helps onboard new employees.
• User expectations: Internal users are often more forgiving of inaccuracies or shortcomings in the technology, given their familiarity with internal processes and systems. They are more likely to understand the experimental nature of the technology and provide constructive feedback.
• Costs: Experimenting internally typically involves lower costs and complexity. Internal systems and processes can be integrated and tested without extensive investment in infrastructure or external-facing interfaces.

External Users
• Example use case: Customer support automation – integrating multiple data sources, such as support tickets, product articles, and training content. The system must accurately interpret customer queries and provide relevant responses, potentially requiring human intervention for complex inquiries.
• User expectations: External users, like customers or clients, have higher expectations for accuracy and reliability. They may be less tolerant of errors or inconsistencies, which could damage the organization’s reputation if exposed.
• Costs: External-facing use cases entail significant investment. They often require comprehensive data integration and sophisticated AI models to ensure accurate and reliable responses, along with investment in data preparation, model training, and system integration.

Consideration 2: Public vs. Internal Data

Public Data
• Example use case: Basic account research – reviewing website content, job postings, and LinkedIn profiles using publicly available information. This enables quick insight into company backgrounds and key decision-makers.
• Data accessibility: Publicly available data like website content and LinkedIn profiles is readily accessible, allowing rapid prototyping and experimentation with Generative AI models for companies of any size.

Internal Data
• Example use case: Deeper account insights or personalized outreach – integration with internal systems may be necessary to access proprietary data on customer interactions, purchase history, or product engagement metrics.
• Data accessibility: Accessing data from internal systems may require licenses, integration effort, and careful consideration of data privacy and security. As organizations mature, they may explore integrating Generative AI with internal data sources; this is most effective for deeper insights or personalized outreach.

Takeaway: Start with publicly available information to allow for rapid experimentation – then, you can integrate with internal data sources to enable more advanced and personalized applications of Generative AI.

Note: Generative AI excels at handling unstructured data such as text and video to generate new content or insights. CRM analytics, which rely on structured data, may not be optimal use cases for Generative AI.

What functions make for strong use cases of AI?

Product Marketing
• Challenge: Building buyer journeys and understanding customer pain points through manual methods like surveys and call recordings is time-consuming and inefficient.
• Solution: Generative AI can analyze call transcripts, extract key insights, and generate buyer personas, pain points, and product strengths, streamlining the content creation process for product marketing teams.

Sales Development Representatives (SDRs)
• Challenge: Crafting personalized outreach messages for cold leads requires extensive research and customization, leading to low response rates and wasted time.
• Solution: Generative AI can automate the research process, analyze company websites and LinkedIn profiles, and generate tailored email sequences, increasing efficiency and engagement with prospects.

Account Executives
• Challenge: Conducting thorough account research and crafting personalized messages for large accounts is time-consuming and often results in subpar outreach efforts.
• Solution: Generative AI can ingest and analyze vast amounts of data, including earnings calls and executive profiles, to generate tailored messaging and meeting requests, enabling account executives to engage effectively with key stakeholders.

Employee Onboarding and Enablement
• Challenge: Traditional onboarding processes overwhelm new hires with extensive training materials and lack real-time support, leading to low retention and fragmented learning experiences.
• Solution: A chatbot powered by Generative AI can provide just-in-time learning support, allowing employees to access relevant information and resources as needed, improving retention and enabling continuous learning. Generative AI can also provide coaching and role-playing practice.

Post-Sales Follow-ups
• Challenge: Ensuring prompt and personalized follow-ups with customers after sales calls is challenging due to manual note-taking and email-drafting processes.
• Solution: Generative AI can automatically generate follow-up emails based on call recordings and notes, allowing sales representatives to quickly review and send personalized messages, improving customer engagement and closing rates.
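As one concrete illustration, the post-sales follow-up use case starts by turning call notes into a drafting prompt for a chat model. The sketch below is hypothetical (the function name and prompt wording are not from any particular product); the resulting prompt would be sent to whichever chat model the team uses, with a human reviewing the draft before it goes out.

```python
# Hypothetical sketch: assemble call notes into a prompt asking a chat
# model to draft a follow-up email. The model call itself is omitted;
# any chat-completion API could consume the resulting prompt string.

def build_followup_prompt(customer: str, notes: list) -> str:
    """Build a drafting prompt from a customer name and call notes."""
    bullets = "\n".join(f"- {note}" for note in notes)
    return (
        f"Draft a brief, personalized follow-up email to {customer}.\n"
        f"Base it only on these call notes:\n{bullets}\n"
        "Close with one concrete next step."
    )

prompt = build_followup_prompt(
    "Acme Corp",
    ["Interested in the analytics add-on", "Budget review next quarter"],
)
# A sales rep would review the model's draft before sending it.
```

Keeping the prompt grounded in the actual call notes is what makes the draft personalized rather than generic.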

Running Experiments

Who should be involved in running AI experiments?

Revenue operations or enablement personnel – this person serves as a central point of coordination; these teams already collaborate across functions within the organization. They play a key role in facilitating the AI experiment and ensuring alignment with broader organizational goals and processes.

Business function representative – it’s essential to involve business representatives who will directly benefit from or interact with the outcomes of the AI experiment. These individuals provide insights into the specific use cases, requirements, and challenges faced by the business, helping to tailor the experiment to address real-world needs effectively.

What are the different tools and tool categories you should consider for leveraging generative AI?

Direct Interaction with ChatGPT and Generative AI Models
What is it? Platforms like OpenAI’s ChatGPT offer direct access to generative AI models for experimentation and query-based interactions. These models can be helpful for generating content such as buyer journey maps.
Limitations:
  • Data availability – ChatGPT’s training data has a cutoff date (April 2023 at the time of writing). Limited or outdated data can impact the accuracy and relevance of generated content.
  • Security concerns – using proprietary or sensitive data with generative AI poses security risks, especially when the model accesses and generates content based on confidential information.
  • Lack of consistency – generative AI models may produce different outputs for the same query, which can be challenging for use cases requiring reliability and repeatability.
Overall: Assess the suitability of ChatGPT for use cases based on data availability, security requirements, and the complexity of prompt chaining and reuse.
Vendor-Specific Co-Pilot Integrations
What is it? Many software vendors offer integrated generative AI functionality, commonly referred to as co-pilots, within their platforms.
Limitations:
  • Fragmented user experience – co-pilots from different vendors offer varied functionality and interfaces, disrupting workflows; navigating multiple tools and systems takes more time and effort.
  • Limited cross-platform support – some co-pilots offer limited cross-platform integration, making it difficult to use their functionality across different tools or environments and leading to siloed data and workflows.
Overall: Utilize co-pilot features within CRM or marketing automation platforms to generate insights or assist with data-driven decision-making, but be aware that there is no one-size-fits-all option.
Unified Data Access and Integration
What is it? Enhancing generative AI capabilities by integrating diverse data sources, both internal and external, to provide comprehensive insights and solutions.
Limitations:
  • Performance overhead – processing and accessing data from multiple sources through a unified access layer may introduce performance overhead, especially with large data volumes or complex data structures, affecting responsiveness and scalability.
  • Integration complexity – integrating and maintaining connections with diverse data sources and systems can be complex and resource-intensive, requiring ongoing effort to ensure compatibility, reliability, and data integrity.
Overall: Leverage tools like Tableau to visualize and analyze integrated data from CRM, ERP, and HR systems, unlocking deeper insights and facilitating decision-making.
Point Solutions
What is it? Specific tools designed to address particular use cases with pre-built functionality, offering ease of use and quick implementation; many include built-in account research.
Limitations:
  • Limited scope – point solutions address specific use cases or functions and may not cover all the requirements of complex or multifaceted projects.
  • Lack of customization – pre-built functionality limits customization options compared to more flexible platforms.
  • Vendor lock-in – adopting multiple point solutions from different vendors can lead to vendor lock-in and interoperability challenges, hindering flexibility and integration efforts.
Examples:
  • SalesMotion
Horizontal Solutions
What is it? Workflow engines or platforms that can be customized and configured for various use cases, often integrating with multiple generative AI models. They are most helpful for customizable workflows and integrations. You can use horizontal solutions in lieu of point solutions entirely, but doing so requires significantly more budget and technical expertise.
Limitations:
  • Complexity of configuration – horizontal solutions often require significant configuration and setup to tailor them to specific use cases or workflows.
  • Integration challenges – integrating horizontal solutions with existing systems and data sources can be complex and time-consuming, requiring expertise in data management and system integration.
  • Overhead for small-scale projects – for smaller projects or simple use cases, the overhead of configuring and maintaining a horizontal solution may outweigh the benefits.
Examples:
  • Clay
  • AnyQuest
  • Copy.ai
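The prompt chaining mentioned above (where one model output becomes the input to the next prompt) can be sketched generically. This illustration is not tied to any vendor: the model is passed in as a plain callable so any chat-completion client could be substituted, and a trivial stand-in function is used here so the example is self-contained.

```python
# Sketch of prompt chaining: the output of one model call feeds the next.
# `complete` is any function mapping a prompt string to a model response;
# a dummy stand-in is used below to show only the chaining mechanics.
from typing import Callable

def chain_persona(transcript: str, complete: Callable[[str], str]) -> str:
    """Step 1: extract pain points. Step 2: build a persona from them."""
    pain_points = complete(
        f"List the customer pain points in this call transcript:\n{transcript}"
    )
    return complete(
        f"Write a one-paragraph buyer persona based on these pain points:\n{pain_points}"
    )

# Stand-in "model" that tags its input, purely to make the flow visible.
dummy = lambda prompt: f"[model output for: {prompt[:40]}...]"
result = chain_persona("Customer said onboarding took too long.", dummy)
```

In practice, `dummy` would be replaced by a real chat-completion call, and each intermediate output would be logged so failed chains can be debugged step by step.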

How do you set reasonable goals for your experiments?

Establish baseline and success criteria – before initiating any experiment, it’s crucial to define a baseline of the current state and clearly establish success criteria. This involves identifying key metrics or indicators that will determine whether the experiment is successful. Having a clear understanding of what success looks like enables effective evaluation and course correction during the experiment.
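The baseline-and-criteria step can be made concrete with a small record per metric; the field names below are illustrative, not a prescribed framework.

```python
# Illustrative sketch: record a baseline and a success threshold for each
# experiment metric, then judge observed results against the threshold.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float   # current-state measurement before the experiment
    target: float     # value that counts as success

    def succeeded(self, observed: float) -> bool:
        """True if the observed value meets or beats the target."""
        return observed >= self.target

# Example: an SDR outreach experiment measured on email reply rate (%).
reply_rate = Metric(name="reply_rate", baseline=2.0, target=4.0)
print(reply_rate.succeeded(5.1))  # beat the target
print(reply_rate.succeeded(2.5))  # improved on baseline but missed target
```

Writing the baseline and target down before the experiment starts is the point: it prevents goalposts from moving once results arrive.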


Focus on learning – the primary goal of experiments should be learning. Recognize that experiments may not always yield the desired outcomes, but they provide valuable insights into what works and what doesn’t for the organization. Emphasize learning from the experiment to inform future decisions and strategies.

Garner executive sponsorship – sponsorship from leadership ensures the outcomes of the experiment are acted upon, helps drive the implementation of successful experiments across the organization, and ensures that learnings are integrated into strategic planning and decision-making. Cultivate champions and advocates within the organization as well; these individuals play a crucial role in promoting the successful approach, driving adoption, and facilitating change management to ensure widespread acceptance and utilization.

Consider implementation and scaling – after conducting the experiment and analyzing the results, consider the potential courses of action based on the findings. This may involve scaling the successful approach across the organization, implementing it in specific functions or regions, or further refining the solution based on feedback and insights gained during the experiment.

What are some common failure points in experiments?

Undefined baseline and success criteria – failing to establish a clear baseline of the current state and define success criteria for the experiment can lead to ambiguity in evaluating its outcomes.

Overemphasis on success – being overly focused on achieving success as defined by predetermined outcomes may lead to disappointment or disillusionment if the experiment does not yield the expected results. It’s important to recognize that experiments are opportunities for learning, regardless of the outcome.

Complexity and confounding variables – running experiments with too many variables or changes simultaneously can make it difficult to isolate the impact of individual factors, leading to unclear or inconclusive results.

How much money is it appropriate to spend on running AI experiments?

Enterprise companies are allocating multiple millions – this covers setting up their own LLMs in private environments, implementing secure infrastructure, and ensuring compliance with regulatory standards. The investment in experimentation for these companies is substantial due to the scale and complexity of their operations.

Small and medium-sized businesses have more constrained budgets – while they may not invest millions, they may still need to allocate tens of thousands to AI experimentation. SMBs prioritize accessibility and affordability when selecting tools and platforms for experimentation. They might opt for solutions that offer a balance between cost-effectiveness and functionality, even if it means compromising slightly on security and privacy compared to enterprise-grade solutions.

What are best practices for documenting and learning from AI experiments?

Collect data during experimentation – continuously collect data during the experiment to track progress and gather insights. Document key metrics such as the time taken to complete tasks, response rates, or any unexpected outcomes. 

Quantify results – quantify the results obtained from the experiment based on the defined success criteria and compare them to the established baseline. This allows for objective evaluation and identification of areas for improvement.

Iterative evaluation – use the documented learnings to iteratively refine the experiment and make adjustments as necessary. Assess whether the experiment met its goals and if not, identify factors contributing to any deviations from the expected outcomes.

Use Automated Tools – leverage tools to record and transcribe discussions about use cases, strengths, weaknesses, opportunities, and threats. This helps build a repository of use cases and facilitates decision-making.

  • Ex. Idea Mentor – this tool is designed for running experiments and assessing use cases. It provides a secure platform and simplifies the process for business users, eliminating the need for extensive technical expertise. Users can upload various content types and analyze them to gauge the effectiveness of different use cases.

How does privacy affect your use of generative AI?

Tool choice has an effect on security:

  • Least secure: ChatGPT and Gemini – uploading sensitive content directly to a public language model poses the highest security risk. Once uploaded, the data is outside of the organization’s control, increasing the likelihood of unauthorized access or exposure.
  • More secure: Retrieval-Augmented Generation (RAG) – this approach chunks the data and sends only the relevant portions to LLM platforms for processing. It reduces the risk of exposing the entire dataset while still leveraging external models for analysis, and is generally sufficient for SMBs. While this method improves security, some chunks of the data still leave the organization.
  • Most secure: On-premise or private models – deploying language models on-premise or in a private cloud environment ensures complete control over the data and eliminates the need to send sensitive information outside the organization. While this approach offers the highest level of privacy, it requires significant technical expertise and infrastructure investment.
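To make the middle tier concrete, here is a minimal sketch of the retrieval step: the document is chunked and only the highest-scoring chunks are forwarded to the model. Real systems score chunks by embedding similarity; simple word overlap is used here purely so the sketch stays self-contained.

```python
# Minimal RAG retrieval sketch: chunk a document, score chunks against the
# query by word overlap, and forward only the top-k chunks to the LLM.
# Production systems would use embeddings instead of word overlap.
import string

def chunk(text: str, size: int = 12) -> list:
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def normalize(text: str) -> set:
    """Lowercase, strip punctuation, and return the set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def top_chunks(query: str, chunks: list, k: int = 2) -> list:
    """Return the k chunks sharing the most words with the query."""
    q = normalize(query)
    return sorted(chunks, key=lambda c: len(q & normalize(c)), reverse=True)[:k]

doc = (
    "Refund policy: customers may request refunds within 30 days. "
    "Shipping takes five business days on average. "
    "Support is available by chat and email around the clock."
)
relevant = top_chunks("what is the refund policy", chunk(doc))
# Only `relevant` (not the whole document) would be sent to the external model.
```

The security benefit comes from the last line: only the retrieved chunks ever leave the organization, not the full dataset.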

Anonymize and preprocess data for higher security requirements – removing or obfuscating personally identifiable information and sensitive content helps to minimize privacy risks. However, implementing such measures can increase the complexity and cost of the architecture.
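A simple form of this preprocessing can be sketched with regular expressions. The patterns below are deliberately narrow (emails and US-style phone numbers only) and are an illustration, not a complete solution; production deployments would use a purpose-built PII-detection library.

```python
# Sketch: redact obvious PII before text reaches an external model.
# Covers only emails and simple US-style phone numbers; a real system
# would use a dedicated PII-detection library with broader coverage.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def anonymize(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

note = "Call Dana at 555-867-5309 or email dana@example.com about the renewal."
print(anonymize(note))
# → "Call Dana at [PHONE] or email [EMAIL] about the renewal."
```

As the paragraph above notes, each additional pattern or detection step adds complexity and cost, so the redaction scope should match the actual sensitivity of the data.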

How can you set yourself up to adjust to future developments in artificial intelligence?

Do not run experiments that you expect to fail – focus on AI use cases with achievable goals and tangible benefits. Avoid high-risk experiments that are likely to fail due to complexity or unrealistic expectations. Start with simpler projects to build momentum and expertise before tackling more ambitious endeavors. While experiments allow for exploration and learning, business-changing initiatives carry higher stakes and require careful planning and risk management.

Evaluate risk vs. reward – assess the level of risk associated with each AI investment. Consider factors such as the potential impact on the business, the cost of failure, and the scalability of the solution. Prioritize investments that offer a balance between innovation and risk mitigation.

Consider point solutions – opt for subscription-based AI solutions that offer flexibility and scalability. By choosing point solutions with subscription models, you can minimize upfront costs and easily switch to alternative solutions if needed.

Invest in customization and personalization – if leveraging horizontal tools, allocate resources for customization and personalization to align the solution with your specific needs. While this approach may require higher initial investment compared to off-the-shelf solutions, it offers greater control and adaptability in the long run.

Understand switching costs – recognize the switching costs associated with large-scale AI investments, such as building internal language model infrastructures. Ensure thorough evaluation and validation of such initiatives to mitigate the risk of costly failures.

What is the value of bringing on a generative AI expert?

A Gen AI expert serves as a translator – they bridge the gap between the complexities of Gen AI technology and the organization’s goals and needs. They provide guidance on selecting the right use cases, tools, and strategies to achieve desired outcomes.

Gen AI technology is constantly evolving – keeping up with near-daily changes is difficult, so a Gen AI expert dedicates their time to staying current with the latest advancements, trends, and best practices, ensuring that the organization remains informed and can leverage cutting-edge solutions. Every organization is also unique, with its own challenges, objectives, and resources, and a Gen AI expert can assess the organization’s specific requirements and tailor solutions accordingly.

Collaboration with external vendors and service providers – a Gen AI expert leverages their network and expertise to identify and engage suitable partners who can assist in the implementation process, whether it’s setting up infrastructure, developing custom models, or providing training and support.

Help to prioritize initiatives, and assess risks and opportunities – beyond the technical aspects, a Gen AI expert contributes to strategic planning and decision-making related to AI experimentation and adoption. Experimenting with AI technologies involves inherent risks, including privacy concerns, security vulnerabilities, and potential failure to deliver expected results. A Gen AI expert helps mitigate these risks by conducting thorough assessments, implementing robust security measures, and monitoring progress throughout the implementation process.

Overall

How is Gen AI likely to change the way that your company operates?

Anticipate a paradigm shift in organizational structure – as businesses adopt Gen AI and similar technologies, they will likely transition to leaner organizational structures with a blend of human agents and AI capabilities. This shift will reshape traditional roles and functions, leading to the emergence of startups and forward-looking companies that fundamentally alter how business processes are executed.

Emergence of a new breed of startups leveraging AI-first principles – forward-looking companies will pioneer innovative approaches to functions like SDR, PMM, and account research, leveraging Gen AI as a foundational element of their operations. These startups will serve as harbingers of a new era in organizational design, characterized by leaner structures and enhanced efficiency through AI integration.

What are the most important things to get right?

Clear objectives and success criteria – before embarking on any AI experiment, companies must define clear objectives and success criteria. Understanding what they aim to achieve and how they will measure success is fundamental to guiding the experiment and assessing its outcomes effectively.

Appropriate use case selection – choosing the right use cases is paramount. Companies should identify use cases that align with their strategic goals, address genuine business needs, and have the potential to deliver significant value. Prioritizing use cases based on feasibility, impact, and resource requirements is essential for maximizing the experiment’s success.

Data quality and accessibility – high-quality data is the lifeblood of AI experimentation. Companies must ensure that they have access to relevant, reliable, and representative data for training and testing their AI models. Data accessibility, cleanliness, and compatibility with chosen AI tools are critical considerations that can significantly impact the experiment’s outcomes. Implement robust security measures and privacy protocols to safeguard confidential information and mitigate risks associated with data breaches or misuse.

Adequate resource allocation and budgeting – companies should allocate sufficient funds, time, and human resources to support the experiment’s execution, monitoring, and evaluation phases. Balancing investment with expected returns and risk tolerance is crucial for optimizing resource utilization and maximizing ROI.

Designing experiments with a focus on iterative learning – companies should adopt agile methodologies and iterative approaches to experiment design, execution, and analysis. Embracing a culture of experimentation, feedback, and adaptation enables organizations to learn from failures and successes alike, driving innovation and progress.

What are common pitfalls?

Lack of innovation and experimentation – many businesses, especially smaller ones, continue with “business as usual” instead of leveraging emerging technologies like Gen AI to optimize their operations. Despite facing challenges in various functions like sales and marketing, there’s a reluctance to explore new approaches or run experiments to improve efficiency and effectiveness.

Failure to adapt to technological advancements – organizations often cling to outdated, manual processes that are neither efficient nor effective. For example, relying on a team of five SDRs or multiple PMMs when a single individual leveraging Gen AI could accomplish the same tasks more efficiently. This failure to embrace technological advancements results in bloated, inefficient structures that hinder agility and competitiveness.
