TOOLKIT: Building the builders

AI-Native Org Evolution. Assess and advance your organization's capability to build, experiment, and learn using AI.

By Rachel Wolan
AI-Native Evolution / Maturity Matrix
7 Dimensions · 5 Levels · A framework for AI-native product orgs
  1. Prototyping & Design Velocity
     L1 Initial: Rare prototypes. Weeks to create.
     L2 Developing: Some AI prototyping. Hours possible.
     L3 Defined: Standard practice. Integrated workflows.
     L4 Advanced: High velocity. Measurable impact.
     L5 Leading: Core to development. PM/PD submit PRs.

  2. Data & Insights Automation
     L1 Initial: Manual analysis. Weeks to insights.
     L2 Developing: Early AI tools. Some automation.
     L3 Defined: AI integrated. 3-7 day cycles.
     L4 Advanced: High automation. 1-3 day cycles.
     L5 Leading: AI-native. Reusable frameworks.

  3. Tool Fluency & Infrastructure
     L1 Initial: Ad-hoc usage. No standardization.
     L2 Developing: Some teams experimenting.
     L3 Defined: Standardized toolset. Regular training.
     L4 Advanced: High proficiency. Custom integrations.
     L5 Leading: Innovation. Internal tool development.

  4. Cross-Functional Builder Culture
     L1 Initial: Teams in silos. Limited collaboration.
     L2 Developing: Some cross-functional projects.
     L3 Defined: Standard workflows. Shared practices.
     L4 Advanced: Deep collaboration. Optimized.
     L5 Leading: Seamless collaboration. Continuous innovation.

  5. Learning & Experimentation
     L1 Initial: Limited experimentation. Siloed.
     L2 Developing: Some experimentation. Early sharing.
     L3 Defined: Regular events. Systematic learning.
     L4 Advanced: Continuous experimentation. Optimized.
     L5 Leading: Learning-driven innovation.

  6. Career Ladder
     L1 Initial: No AI skills in career frameworks.
     L2 Developing: AI mentioned but not defined.
     L3 Defined: Clear AI competencies per level.
     L4 Advanced: AI fluency tied to advancement.
     L5 Leading: AI leadership expected at senior levels.

  7. Interviewing
     L1 Initial: No AI-related interview questions.
     L2 Developing: Basic AI awareness assessed.
     L3 Defined: AI tool proficiency evaluated.
     L4 Advanced: AI problem-solving emphasized.
     L5 Leading: AI innovation & leadership assessed.

01 Overview

The AI Product Org Maturity Model helps product organizations assess and advance their capability to build, experiment, and learn using AI-powered tools. It measures how effectively teams leverage AI to accelerate product development, from initial concept to validated insights.

This maturity model is specifically designed for Product Management, Product Design, Data Science, and User Research teams. Each dimension includes role-specific indicators to help you understand where your organization stands and how to advance.

02 Core Dimensions

Seven dimensions determine an organization's ability to build AI-native products. Each is assessed across five maturity levels in the matrix above.

  A. Prototyping & Design Velocity: How quickly Product & Design move from idea to interactive prototype.
  B. Data & Insights Automation: How effectively Data Science & Research use AI to accelerate analysis.
  C. Tool Fluency & Infrastructure: Depth and breadth of AI tool adoption across product teams.
  D. Cross-Functional Builder Culture: How well PM, Design, Data, and Research collaborate using AI.
  E. Learning & Experimentation: Ability to learn from experiments and improve continuously.
  F. Career Ladder: How AI fluency shows up in competencies and advancement.
  G. Interviewing: How AI skills are evaluated when hiring into the org.

03 Maturity Levels

All dimensions are assessed across five levels. Your organization may be at different levels across different dimensions. This is normal and helps prioritize improvement efforts.

Level  Name        Description
1      Initial     Ad-hoc, individual experimentation
2      Developing  Some teams experimenting, early standardization
3      Defined     Standardized practices, integrated workflows
4      Advanced    Optimized usage, continuous improvement
5      Leading     Innovation and thought leadership
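Teams that track assessments in a spreadsheet or script can encode this five-level scale directly. A minimal sketch (the class name is an assumption, not part of the model):

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    """The five-level scale used across all seven dimensions."""
    INITIAL = 1      # Ad-hoc, individual experimentation
    DEVELOPING = 2   # Some teams experimenting, early standardization
    DEFINED = 3      # Standardized practices, integrated workflows
    ADVANCED = 4     # Optimized usage, continuous improvement
    LEADING = 5      # Innovation and thought leadership

# IntEnum compares numerically, so levels order naturally:
assert MaturityLevel.DEFINED > MaturityLevel.DEVELOPING
print(MaturityLevel(4).name)  # ADVANCED
```

Using `IntEnum` keeps the level names readable while letting per-dimension scores sort and compare as plain integers.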

04 Dimension 1: Prototyping & Design Velocity

The question: How fast can a Product or Design team go from a sentence to an interactive prototype?

Time to Prototype (faster is better)
  L1: 1-2 weeks · L2: 2-5 days · L3: 4-8 hours · L4: 1-4 hours · L5: < 1 hour

Coverage (% of concepts prototyped)
  L1: 10% · L2: 30% · L3: 70% · L4: 85% · L5: 95%

Who Creates Them (Design / PM+PD)
  L1: 100% / 0% · L2: 80% / 20% · L3: 60% / 40% · L4: 50% / 50% · L5: 40% / 60%
Level 1: Initial
Prototypes are rare, slow, and only Design ships them.
Time: 1-2 weeks · Coverage: 10% · PM share: 0%
<10% of concepts have a prototype.

Level 2: Developing
Pockets of AI tooling. Hours instead of days, sometimes.
Time: 2-5 days · Coverage: 30% · PM share: 20%
PMs creating 10-20% of prototypes.

Level 3: Defined
Standard practice. AI tools in the default workflow.
Time: 4-8 hours · Coverage: 70% · PM share: 40%
Prototypes in every kickoff template.

Level 4: Advanced
High velocity, custom workflows, measured impact.
Time: 1-4 hours · Coverage: 85% · PM share: 50%
Prototype-to-decision time tracked.

Level 5: Leading
PMs and Designers submit PRs. Prototypes become production.
Time: < 1 hour · Coverage: 95% · PM share: 60%
95%+ prototyped before formal design.

05 Dimension 2: Data & Insights Automation

The question: How much of the research-to-insight pipeline is still done by hand?

Data & Insights Pipeline: who still does each stage by hand
(M = Manual, A = Assisted, U = Automated, N = AI-Native)

Level          Collect  Clean  Explore  Analyze  Synthesize  Ship   Cycle Time
L1 Initial     M        M      M        M        M           M      2-4 wk
L2 Developing  A        A      M        M        M           M      10-15 d
L3 Defined     U        U      U        A        A           A      3-7 d
L4 Advanced    U        U      U        U        U           A      1-3 d
L5 Leading     N        N      N        N        N           N      < 1 d

L1: Every stage of the pipeline is done by hand.
L2: AI takes over data prep. The rest is still human.
L3: Standard workflows. AI assists synthesis.
L4: End-to-end automation. Humans review.
L5: AI-native. Reusable frameworks for the rest of the org.

06 Dimension 3: Tool Fluency & Infrastructure

Definition: The depth and breadth of AI tool adoption, proficiency, and infrastructure support across Product, Design, Data, and Research teams.

Level 1: Initial

State: Ad-hoc tool usage, no standardization, limited access.

Indicators:

  • <10% of product org uses AI tools regularly
  • No approved tool list
  • No tool budget or procurement process
  • No training programs
  • Tools accessed individually (personal accounts)

Level 2: Developing

State: Some teams experimenting, early standardization beginning.

Indicators:

  • 25-40% adoption in key teams
  • Initial tool evaluation framework
  • Basic training sessions offered (quarterly or ad-hoc)
  • Some tools procured organizationally
  • Tool champions emerging in each discipline

Level 3: Defined

State: Standardized toolset, integrated workflows, regular training.

Product-Specific Tools:

  • AI code editors (Cursor) for prototyping
  • AI design tools (Figma Make) for rapid design
  • AI analysis platforms for data insights

Indicators:

  • 60-75% adoption across product org
  • Standardized tool stack per discipline
  • Quarterly training cadence
  • Tools in standard project kickoff templates
  • Tool usage metrics tracked
  • New team members onboarded to tools

Level 4: Advanced

State: High proficiency, custom integrations, optimized usage.

Indicators:

  • 85%+ adoption with high proficiency
  • Custom workflows and automations
  • Tool ROI measured and reported
  • Internal tool communities and knowledge sharing
  • Advanced training and certification programs
  • Tools integrated with core product infrastructure

Level 5: Leading

State: Innovation, thought leadership, tool development.

Indicators:

  • 95%+ adoption with expert-level proficiency
  • Tool partnerships and co-development
  • Industry speaking/thought leadership
  • Internal tool development projects
  • Open source contributions or tool improvements
  • Tool usage drives competitive advantage

07 Dimension 4: Cross-Functional Builder Culture

Definition: How effectively Product Management, Design, Data Science, and User Research collaborate using AI tools to build, experiment, and learn together.

Level 1: Initial

State: Teams work in silos, limited collaboration, no shared AI practices.

Indicators:

  • Teams use different tools and processes
  • Collaboration happens through formal handoffs
  • No shared AI tool knowledge or practices
  • Limited cross-functional prototyping or analysis
  • Silos between PM, Design, Data, Research

Level 2: Developing

State: Some cross-functional collaboration, early shared practices.

Indicators:

  • 25-40% of projects involve cross-functional AI tool usage
  • Some shared tool training sessions
  • Informal communities forming around AI tools
  • PM-Design or Data-Research collaboration increasing
  • Early examples of cross-functional prototypes

Level 3: Defined

State: Standard cross-functional workflows, shared practices, regular collaboration.

Product-Design Collaboration:

  • PMs and Designers co-creating prototypes
  • Shared understanding of prototyping tools and practices
  • Joint exploration of product concepts

Data-Research Collaboration:

  • Data Scientists and Researchers using AI tools together
  • Shared analysis workflows and insights
  • Collaborative research synthesis

Cross-Discipline Collaboration:

  • PM-Data collaboration on insights and analysis
  • Design-Research collaboration on user understanding
  • All disciplines contributing to product decisions using AI tools

Indicators:

  • 60-75% of projects involve cross-functional AI collaboration
  • Standard cross-functional workflows established
  • Quarterly cross-functional training sessions
  • Shared tool documentation and best practices
  • Cross-functional AI tool communities active

Level 4: Advanced

State: Deep collaboration, optimized workflows, measurable impact.

Indicators:

  • 85%+ of projects involve cross-functional AI collaboration
  • Custom workflows for cross-functional patterns
  • Collaboration metrics tracked and improved
  • Cross-functional AI tool expertise across all teams
  • Collaboration drives measurable product outcomes

Level 5: Leading

State: Seamless collaboration, innovation, thought leadership.

Indicators:

  • 95%+ of projects involve cross-functional AI collaboration
  • Thought leadership on cross-functional AI collaboration
  • Custom tools developed for cross-functional needs
  • Collaboration practices shared externally
  • Cross-functional AI usage drives innovation

08 Dimension 5: Learning & Experimentation

Definition: The organization's ability to learn from experiments, share knowledge, and continuously improve AI-native practices.

Level 1: Initial

State: Limited experimentation, knowledge siloed, no systematic learning.

Indicators:

  • <10% of teams regularly experiment with AI tools
  • No systematic capture of learnings
  • Limited knowledge sharing about AI tools
  • No experimentation framework or process
  • Learnings not applied to future work

Level 2: Developing

State: Some experimentation, early knowledge sharing, basic learning processes.

Indicators:

  • 25-40% of teams regularly experiment
  • Informal knowledge sharing (Slack, ad-hoc sessions)
  • Basic experimentation frameworks (Builder Day, hackathons)
  • Some documentation of learnings
  • Early communities and champions emerging

Level 3: Defined

State: Regular experimentation, systematic learning, knowledge sharing processes.

Indicators:

  • 60-75% of teams regularly experiment
  • Quarterly experimentation events (e.g., Builder Day)
  • Systematic knowledge sharing (documentation, forums, sessions)
  • Best practices documented and accessible
  • Regular learning reviews and retrospectives
  • Active communities around AI tools

Level 4: Advanced

State: Continuous experimentation, optimized learning, measurable improvement.

Indicators:

  • 85%+ of teams continuously experiment
  • Experimentation integrated into standard workflows
  • Learning metrics tracked and improved
  • Knowledge sharing optimized and automated where possible
  • Learnings drive measurable improvements in practices
  • Innovation cycles accelerated

Level 5: Leading

State: Learning-driven innovation, thought leadership, external sharing.

Indicators:

  • 95%+ of teams continuously experiment and innovate
  • Thought leadership on AI-native learning practices
  • External sharing of learnings and best practices
  • Learning culture recognized externally
  • Experimentation drives measurable competitive advantage
  • Innovation cycles fastest in industry

09 Assessment Tool

Use this assessment to identify your organization's current maturity level across each dimension.

How to Use This Assessment

  1. Assess Each Dimension Independently - Your organization may be at different levels across dimensions
  2. Involve Multiple Perspectives - Get input from PM, Design, Data, and Research teams
  3. Be Honest - The goal is to identify where you are, not where you want to be
  4. Focus on Evidence - Use the indicators to guide your assessment, not just opinions

Scoring Your Assessment

  1. For each dimension, identify the highest level where you answer "Yes" to all questions
  2. Your maturity level for that dimension is the highest level you fully meet
  3. If you meet some criteria for a higher level but not all, note it as "approaching" that level
  4. Create a maturity profile showing your level across all seven dimensions
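The scoring rule above is mechanical enough to sketch in a few lines of code. This is an illustration only: it assumes levels are cumulative (the first level you fail caps your score), and the yes/no answers are hypothetical.

```python
LEVELS = ["Initial", "Developing", "Defined", "Advanced", "Leading"]

def score_dimension(answers_per_level):
    """answers_per_level: one list of yes/no (True/False) answers per level, L1..L5.

    Returns (level_number, level_name, approaching_next): the maturity level is
    the highest level where every answer is "Yes"; `approaching_next` is True
    when some, but not all, criteria of the next level are met.
    """
    level = 0
    for answers in answers_per_level:
        if all(answers):
            level += 1
        else:
            approaching = any(answers)  # partial credit at the first unmet level
            name = LEVELS[level - 1] if level else "Below Initial"
            return level, name, approaching
    return level, LEVELS[level - 1], False

# Hypothetical self-assessment for one dimension:
example = [
    [True, True, True],   # L1: all criteria met
    [True, True],         # L2: all criteria met
    [True, False],        # L3: some, not all -> "approaching" Level 3
    [False, False],
    [False],
]
print(score_dimension(example))  # (2, 'Developing', True)
```

Running this per dimension and collecting the results gives you the seven-entry maturity profile described in step 4.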

Interpreting Your Results

Uneven Maturity is Normal: Most organizations will be at different levels across dimensions. This is expected and helps prioritize where to focus improvement efforts.

Focus Areas:

  • If Level 1-2 across most dimensions: Focus on foundational tool adoption and training
  • If Level 2-3 across most dimensions: Focus on standardization and workflow integration
  • If Level 3-4 across most dimensions: Focus on optimization and advanced use cases
  • If Level 4-5 across most dimensions: Focus on innovation and thought leadership

Quick Wins:

  • Identify dimensions where you're close to the next level (e.g., Level 2 approaching Level 3)
  • These are good candidates for focused improvement efforts
  • Small investments can yield significant progress
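Both the quick-win and focus-area heuristics fall out of the maturity profile directly. A sketch, using an invented seven-dimension profile (the levels shown are hypothetical, not benchmarks):

```python
import statistics

# Hypothetical profile: dimension -> (current level, approaching next level?)
profile = {
    "Prototyping & Design Velocity":    (2, True),
    "Data & Insights Automation":       (3, False),
    "Tool Fluency & Infrastructure":    (2, True),
    "Cross-Functional Builder Culture": (1, False),
    "Learning & Experimentation":       (3, False),
    "Career Ladder":                    (1, True),
    "Interviewing":                     (2, False),
}

# Quick wins: dimensions already meeting some criteria of the next level.
quick_wins = [dim for dim, (_, approaching) in profile.items() if approaching]

# Overall focus area: the typical (median) level across dimensions tells you
# which band ("Level 1-2", "Level 2-3", ...) your improvement efforts sit in.
median_level = statistics.median(lvl for lvl, _ in profile.values())

print(quick_wins)     # the focused-improvement candidates
print(median_level)   # 2 -> focus on standardization and workflow integration
```

The median is deliberately coarse: it only orients the "Focus Areas" guidance above, while the per-dimension quick-win list is what actually drives prioritization.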

Self-Assessment Questions

For each dimension, review the indicators in the dimension sections above and assess which level best describes your organization. Use the detailed level descriptions to guide your assessment.

Note: A full checklist of assessment questions for each dimension and level is available in the detailed maturity model documentation. Use the dimension sections above as your primary assessment guide.

10 Roadmap: Advancing Your Maturity

Use this roadmap to identify specific actions you can take to advance from your current level to the next level in each dimension.

From Level 1 to Level 2

Key Focus: Start experimenting, build early wins

  • Prototyping & Design Velocity: Introduce AI prototyping tools (Figma Make, Cursor) to one product team. Run pilot training session for PMs and Designers. Set goal: Create 3-5 prototypes using AI tools in next quarter.
  • Data & Insights Automation: Pilot AI transcription tools for user research interviews. Experiment with AI code assistants (Cursor) for data analysis. Set goal: Use AI tools for 25% of analysis work in next quarter.
  • Tool Fluency & Infrastructure: Identify tool champions in each discipline. Establish basic tool evaluation process. Procure organizational licenses for 2-3 key tools.
  • Cross-Functional Builder Culture: Run one cross-functional workshop on AI tools. Create shared Slack channel for AI tool discussions. Set goal: One cross-functional project using AI tools.
  • Learning & Experimentation: Plan first Builder Day or hackathon event. Create basic documentation template for learnings. Set goal: Capture learnings from 3-5 experiments.

From Level 2 to Level 3

Key Focus: Standardize practices, integrate workflows

  • Prototyping & Design Velocity: Establish standard prototyping tool stack. Integrate prototypes into project kickoff templates. Set quarterly training cadence. Set goal: 60% of concepts have prototypes, PMs create 30%.
  • Data & Insights Automation: Standardize AI tools for all interviews and analysis. Create standard workflows for AI-assisted analysis. Set goal: 60% of analysis uses AI tools, 3-7 day insight cycle.
  • Tool Fluency & Infrastructure: Finalize approved tool stack per discipline. Establish quarterly training program. Create tool documentation and best practices. Set goal: 60% adoption across product org.
  • Cross-Functional Builder Culture: Establish standard cross-functional workflows. Create shared tool documentation. Set quarterly cross-functional training. Set goal: 60% of projects involve cross-functional AI collaboration.
  • Learning & Experimentation: Establish quarterly Builder Day events. Create systematic knowledge sharing process. Document best practices. Set goal: 60% of teams regularly experiment.

From Level 3 to Level 4

Key Focus: Optimize usage, measure impact

  • Prototyping & Design Velocity: Create custom templates and workflows. Measure prototype-to-decision time. Optimize based on metrics. Set goal: 85% of concepts prototyped, 1-4 hour creation time.
  • Data & Insights Automation: Create custom AI workflows for common patterns. Automate insight surfacing to product teams. Measure analysis velocity. Set goal: 85% of analysis uses AI, 1-3 day insight cycle.
  • Tool Fluency & Infrastructure: Measure and optimize tool ROI. Create custom integrations and workflows. Build internal tool communities. Set goal: 85% adoption with high proficiency.
  • Cross-Functional Builder Culture: Optimize collaboration workflows. Measure collaboration impact on outcomes. Create custom tools for cross-functional work. Set goal: 85% of projects involve cross-functional collaboration.
  • Learning & Experimentation: Integrate experimentation into standard workflows. Measure learning metrics. Optimize knowledge sharing. Set goal: 85% of teams continuously experiment.

From Level 4 to Level 5

Key Focus: Innovate, lead, share externally

  • Prototyping & Design Velocity: Enable PMs and Product Designers to submit pull requests. Train PMs and Product Designers to generate Python notebooks. Create frameworks and templates for code contributions from non-engineering roles. Set goal: PMs and Product Designers contributing code regularly.
  • Data & Insights Automation: Build reusable analysis frameworks and libraries. Create internal tools that democratize data analysis. Enable self-service research analysis tools. Contribute to open source AI analysis or research tools. Set goal: Internal tools used by other teams, open source contributions.
  • All Dimensions: Pioneer new use cases and techniques. Contribute to tool development. Share best practices externally (blog posts, talks). Build thought leadership. Set goal: Industry recognition, competitive advantage.