Create an organization where humans and AI agents collaborate seamlessly
to deliver faster, higher-quality, and more adaptive outcomes — anchored by five
non-negotiable principles.
01 · Human-in-the-loop by design: Humans remain in control of consequential decisions.
02 · Augmentation over replacement: AI elevates people; it does not substitute for them.
03 · Iterative deployment: Start small, validate fast, scale intentionally.
04 · Governance & safety first: Risk, privacy, and compliance lead every rollout.
05 · Data-driven improvement: Measure everything; optimize continuously.
Strategic Domains
Four interconnected domains that together define our full transformation agenda
AI Product Experiences
Build AI-native experiences that drive engagement, conversion, and customer satisfaction.
Everyday AI
Enable every employee to use AI as a daily productivity tool — not just builders.
AI-Powered PDLC
AI enhances every phase of product development within the existing squad operating model. Humans drive; AI accelerates.
Agentic Development
AI as a collaborator and actor. AI executes meaningful work; humans define intent and orchestrate outcomes.
| Level | Name | People | Process | Tools |
|---|---|---|---|---|
| L1 | Exploring | Early adopters; informal champions | Pilots and experimentation | Experiment with tools approved by Y! |
| L2 | Establishing | Normalize human + AI collaboration through knowledge sharing; role & responsibility gaps identified | Adapt practices and workflows based on learnings; identify guardrails needed | Approved toolset active; essential integrations in place |
| L3 | Scaling | Org-wide adoption; AI competency expectations | Practices standardized; teams self-sufficient | Tools scaled across teams; deeply integrated into workflows |
| L4 | Optimizing | Continuous upskilling; AI-native culture | Self-improving loops; advanced governance | Tooling continuously optimized; embedded across all workflows |
| Domain | Q1 2026 | Q2 2026 | Q3 2026 | Q4 2026 |
|---|---|---|---|---|
| AI Product Experiences | L2 · Establishing | L2 · Establishing | L3 · Scaling | L3 · Scaling |
| Everyday AI | L1 · Exploring | L2 · Establishing | L3 · Scaling | L3 · Scaling |
| AI-Powered PDLC | L1 · Exploring | L2 · Establishing | L2 · Establishing | L3 · Scaling |
| Agentic Development | L1 · Exploring | L1 · Exploring | L2 · Establishing | L2 · Establishing |
🌐
AI Product Experiences
Two dedicated squads are shipping AI-powered experiences to production, with instrumentation and OKRs in place. A shared design paradigm has not yet emerged — patterns, best practices, and standards are still forming across teams. Q2 focus is on completing shared infrastructure and establishing the foundation for every product squad to build AI experiences consistently.
Maturity Trajectory: Q1 2026 (L2 · Establishing) → Q2 2026 (L2 · Establishing)
Q2 2026
- Complete Scout-as-a-Service integration; on track for near-term completion
- Ship AI Starter Kit v1.0 to all squads: MCP connectivity, knowledge base integrations, observability tooling, evaluation platform, and standardized patterns
- Onboard all product squads onto the AI Starter Kit by end of Q2
- Mature the continuous evaluation loop: automated accuracy scoring + human review expectations (see the sketch after this list)
- Establish a design paradigm for AI-powered product experiences: define principles, interaction patterns, and standards that will guide how all squads approach AI feature design
- Establish a practice for knowledge and pattern sharing across squads building AI experiences, plus working agreements between these squads and the AI Foundation squad
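Below is a minimal sketch of how the automated-scoring half of that evaluation loop might route low-scoring outputs into a human review queue. The scorer, the 0.8 threshold, and all names are illustrative assumptions, not the Starter Kit evaluation platform's actual APIs:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    sample_id: str
    accuracy: float          # automated score in [0, 1]
    needs_human_review: bool

def score(sample_id: str, model_output: str, reference: str) -> EvalResult:
    # Placeholder scorer: exact match. A production loop would use
    # task-specific graders (rubrics, LLM-as-judge, etc.).
    accuracy = 1.0 if model_output.strip() == reference.strip() else 0.0
    # Route low-scoring samples to human review, matching the
    # "automated accuracy scoring + human review" expectation.
    return EvalResult(sample_id, accuracy, needs_human_review=accuracy < 0.8)

review_queue = []
for sid, out, ref in [("s1", "42", "42"), ("s2", "41", "42")]:
    result = score(sid, out, ref)
    if result.needs_human_review:
        review_queue.append(result)

print(f"{len(review_queue)} sample(s) routed to human review")
```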
Q3 2026
- All product squads onboarded onto the AI Starter Kit and capable of building AI-powered features; adoption will vary based on squad priorities and roadmap
- Launch next wave of AI-powered features across all products using Scout-as-a-Service infrastructure
- Close continuous learning loop: user interaction signals feed back into model improvement pipeline
- Use the established KPIs to assess whether scale targets have been achieved
Q4 2026
- Optimize based on Q3 scale-up learnings; address squad-level friction and quality gaps
- Expand learning loop: broaden signal sources across more products and user cohorts
- Report full-year outcomes
- Define 2027 Optimization roadmap: tech optimization, cross-product AI experience coherence
💼
Everyday AI
Core AI tools are already in employees' hands. The focus now is adoption depth, proficiency, and embedding AI into daily workflows across every function.
Maturity Trajectory: Q2 2026 (L2 · Establishing)
Q2 2026
- GitHub Copilot, ChatGPT Enterprise, Claude Code, Google Gemini, and NotebookLM are all available org-wide; Atlassian MCP integration is in place — Q2 focus shifts to adoption depth and proficiency
- AI Evangelists publish curated, function-specific guidance (the right tool for the right job) to cut through noise in an increasingly crowded tooling landscape
- Complete tool evaluations and procurement for high-impact platforms; communicate a defined core AI tool stack to all functions
- Leverage available MCP integrations to increase context availability and ground AI outputs in internal knowledge
- Publish curated use case library covering everyday productivity scenarios: meeting summaries and follow-ups, Slack triage, research synthesis, document creation, and simple personal workflow automation
- Establish usage baselines across the existing tool stack: daily active users, self-reported time savings, and adoption rates by function (see the sketch after this list)
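As a concrete illustration of the baselines bullet above, here is a sketch of computing per-function weekly active usage from tool usage exports; the event shape, names, and headcounts are hypothetical:

```python
from collections import defaultdict

# One record per user-tool pair active during the measurement week
# (the shape of the export is an assumption).
events = [
    {"user": "a", "function": "Engineering", "tool": "GitHub Copilot"},
    {"user": "b", "function": "Engineering", "tool": "Claude Code"},
    {"user": "c", "function": "Design", "tool": "ChatGPT Enterprise"},
]
headcount = {"Engineering": 3, "Design": 2}  # per-function headcount

active_by_function = defaultdict(set)
for e in events:
    active_by_function[e["function"]].add(e["user"])

# Adoption rate per function = distinct active users / headcount;
# the Q3 target is 70%+ weekly active usage org-wide.
for fn, users in sorted(active_by_function.items()):
    print(f"{fn}: {len(users) / headcount[fn]:.0%} weekly active")
```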
Q3 2026
- Drive org-wide active usage of the full AI tool stack; ensure every function has role-appropriate guidance and at least one validated high-value workflow
- Ensure AI learning resources and opportunities are easy to find and act on; individuals are expected to be proactive, and the org's role is to surface the right resources, create the right opportunities, and remove barriers to self-directed growth
- Continue expanding the use case library based on Q2 learnings and emerging patterns from across the org
- Normalize human + AI collaboration through visible leadership modeling, internal story-sharing, and team retrospectives
- Track weekly active users; target 70%+ active usage across the org; surface and address adoption gaps per function
Q4 2026
- Integrate AI adoption and competency into performance review cycles; establish AI-assisted work as the expected standard, not the exception
- Recognize top AI adopters and workflow innovations; share impact stories across the org to reinforce culture shift
- Measure full-year productivity gains by function; target 15–20% reduction in time spent on routine tasks
- Assess tool stack consolidation opportunities; deprecate low-adoption tools and reinvest in highest-value platforms
- Define 2027 Optimization roadmap: deeper personal workflow automation, intelligent communication and meeting tools, cross-function AI integration
⚙️
AI-Powered PDLC
AI as a tool within existing squad structures — humans drive, AI accelerates across every phase of the lifecycle. Universally adoptable, lower risk, immediate ROI.
Maturity Trajectory: Q2 2026 (L2 · Establishing) → Q3 2026 (L2 · Establishing)
Q2 2026
- Map PDLC friction points across all phases and introduce AI-augmented workflows across discovery, requirements, design, engineering, QA, and delivery — AI Evangelists to lead
- Discovery & Requirements: activate AI tools for research synthesis, customer insight clustering, PRD drafting, and backlog generation in pilot squad(s)
- Design & Prototyping: establish AI-assisted design as a core practice — faster iteration on flows, wireframes, and UI explorations; reduce time from concept to testable prototype
- Engineering & QA: introduce AI-assisted code review, test generation, and bug triage in pilot squad(s)
- Adapt workflows based on pilot learnings; identify guardrails needed per PDLC phase — entry/exit criteria, acceptable use boundaries, and AI review checkpoints
- Capture PDLC baseline metrics (cycle time, defect escape rate, review turnaround) and publish the first AI-augmented workflow documentation; a sketch of these metrics follows this list
- Identify roles & responsibilities gaps created by AI-augmented workflows; surface findings for leadership review
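A small sketch of how the three baseline metrics named above could be computed from tracker data; the work-item fields and timestamps are assumptions about the tracker export, not our actual schema:

```python
from datetime import datetime

# Work items with tracker timestamps (field names are assumptions).
items = [
    {"started": datetime(2026, 4, 1), "done": datetime(2026, 4, 8),
     "review_requested": datetime(2026, 4, 5), "review_done": datetime(2026, 4, 6)},
    {"started": datetime(2026, 4, 3), "done": datetime(2026, 4, 13),
     "review_requested": datetime(2026, 4, 9), "review_done": datetime(2026, 4, 11)},
]
defects = [{"escaped_to_prod": True}, {"escaped_to_prod": False},
           {"escaped_to_prod": False}]

cycle = [(i["done"] - i["started"]).days for i in items]
review = [(i["review_done"] - i["review_requested"]).days for i in items]
# Defect escape rate = defects found in production / all defects found.
escape_rate = sum(d["escaped_to_prod"] for d in defects) / len(defects)

print(f"avg cycle time: {sum(cycle) / len(cycle):.1f} days")
print(f"avg review turnaround: {sum(review) / len(review):.1f} days")
print(f"defect escape rate: {escape_rate:.0%}")
```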
Q3 2026
- Expand AI-augmented PDLC to all squads; capture comparative cycle time and quality data
- Launch continuous validation framework: automated regression + AI-powered QA agents
- Run retrospectives/surveys on AI tool adoption; close tooling gaps and refine workflows based on learnings
- Reduce unnecessary handoffs by redesigning workflows around AI-augmented outputs
- Publish refined PDLC workflow templates and AI tool integration guides in internal wiki
Q4 2026
- Standardize AI-augmented PDLC as the default process across all product teams — full scale achieved
- Squads become self-sufficient in their AI-augmented workflows; the AI Evangelists hold standards and support cross-team consistency
- Report full-year results: cycle time reduction, defect rate improvement, and team satisfaction scores
- Integrate PDLC AI metrics into executive dashboard for ongoing visibility
- Identify highest-value opportunities for 2027 Optimization: multi-agent QA pipelines, autonomous requirement generation
🤖
Agentic Development
AI as a collaborator and actor — AI executes meaningful portions of work, humans orchestrate. In H1 2026, this domain is intentionally selective and experimental as we build the infrastructure, governance, and system readiness needed for broader adoption.
Maturity Trajectory: Q3 2026 (L2 · Establishing) → Q4 2026 (L2 · Establishing)
Dual Operating Model
Squads · Current Model
Serve as the delivery baseline and comparison group. Preserve continuity for critical business functions.
Pods · Agentic Model
Operate on agentic principles: rapid iteration, human + AI collaboration, reduced handoffs, semi-autonomous execution. Innovation environments defining the future-state SDLC.
Q2 2026
- Quotes Squad (Kiro Pilot): introduce AI-assisted capabilities within the existing Squad PDLC; evaluate Kiro tooling to augment workflows without going fully agentic
- AlphaSpace Pod (Agentic Pilot): stand up a fully agentic Pod in a greenfield 0→1 environment with AI agents across code generation, testing, docs, and orchestration
- Mobile Pod (Agentic Pilot): form a dedicated mobile pod to execute a greenfield mobile build using agentic development practices; serves as our first agentic pilot in the mobile surface area
- Nimbus frontend migration from Svelte to React: a necessary Q2 precursor that aligns the frontend stack with broader ecosystem tooling and reduces instability
- Launch a readiness assessment across all product surfaces and platform services; score each system on a 1–5 scale and track scores in the software catalog as the source of truth (see the sketch after this list)
- Select foundation models and an agent orchestration plan, evaluated against security, compliance, scalability, and integration criteria
- Close Q2 with documented early agentic artifacts from pilot learnings: prompt libraries, agent task definitions, and "Do/Don't" patterns — the foundation of the Q3 starter pack
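One possible shape for the 1–5 readiness scores tracked in the software catalog, sketched below; the scoring dimensions, the equal weighting, and the system name are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ReadinessScore:
    system: str
    test_coverage: int    # each dimension scored 1-5
    observability: int
    docs_quality: int
    api_stability: int

    @property
    def overall(self) -> float:
        dims = [self.test_coverage, self.observability,
                self.docs_quality, self.api_stability]
        return sum(dims) / len(dims)  # equal weighting (an assumption)

record = ReadinessScore("quotes-service", test_coverage=4,
                        observability=3, docs_quality=2, api_stability=4)
# Agentic work is gated on the score, per the phased 2027 roadmap.
print(f"{record.system}: {record.overall:.2f} / 5")
```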
Q3 2026
- Form one additional Pod informed by Q2 lessons learned, bringing the cohort to three; Pod formation requires identifying team members with demonstrated AI proficiency, product surfaces and services conducive to agentic development, and high-value initiatives that align with both and carry sufficient innovation tolerance
- Deploy the approved core tech stack to agentic pods: LLM, orchestration, and workflow automation; implement agent memory architecture and launch a RAG pipeline for internal knowledge management (a toy sketch follows this list)
- Enter Q3 with a starter pack of early lessons, tooling choices, and working patterns distilled from Q2 pilot experience; distribute to all active pods as a shared foundation to build from
- Evolve the starter pack over the course of Q3 into a living agentic playbook, shaped by the collective experiences of all 3 pods — covering what works, what doesn't, tooling guidance, team structure, and guardrails
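To make the RAG item above concrete, here is a toy sketch of the retrieve-then-ground flow; a real pipeline would use an embedding model and a vector store rather than token overlap, and the documents shown are invented:

```python
docs = {
    "onboarding.md": "how squads onboard onto the AI Starter Kit",
    "guardrails.md": "entry and exit criteria for agentic workflows",
}

def overlap(a: str, b: str) -> int:
    # Stand-in for embedding similarity: count shared lowercase tokens.
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(docs, key=lambda d: overlap(query, docs[d]), reverse=True)
    return ranked[:k]

query = "What guardrails apply to agentic workflows?"
context = "\n".join(docs[d] for d in retrieve(query))
# The grounded prompt is what the agent's model call would receive.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```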
Q4 2026
- Identify and form 2 additional agentic pods to join the cohort in Q4, informed by readiness scores, demonstrated AI proficiency, and playbook learnings from Q2/Q3
- Publish Agentic Development Playbook v1.0, informed by all Q2/Q3 Pod learnings
- Finalize approved enterprise AI tool stack; deprecate shadow and unapproved tools; establish essential integrations across key business systems
- Implement a cross-system audit trail for all agent-assisted decisions (sketched after this list); conduct a data privacy review across all active agent workflows
- Report quantified ROI: velocity, quality, cost, and developer experience — including risks observed (hallucination, drift, coordination failures)
- Publish Q4 readiness score report; define which systems will be elevated to readiness 4–5 in 2027
- Design 2027 scaling roadmap: phased expansion gated by readiness scores; guardrails defined for each readiness tier
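A sketch of what a single record in that cross-system audit trail might contain; every field name and value here is a placeholder, chosen to show that the trail should capture the actor, action, inputs, model, and human approver:

```python
import json
from datetime import datetime, timezone

# Append-only JSON lines keep the trail queryable across systems.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "agent": "qa-triage-agent",         # hypothetical agent identity
    "action": "closed_duplicate_ticket",
    "inputs": {"ticket_id": "Q-1234"},  # invented example input
    "model": "approved-llm-v1",         # placeholder model identifier
    "human_approver": "jdoe",           # None for fully automated steps
}
print(json.dumps(entry))
```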
The AI Evangelists are a small cross-functional team — PM, Engineer, Designer, and TPM — operating as a collective of motivated enthusiasts. They are not AI experts or a delivery team. Their role is to curate relevant AI signal, reduce noise, and keep teams informed so that adoption feels approachable rather than overwhelming.
What They Do
- Curate and filter AI tools, news, and use cases
- Publish a weekly AI newsletter for the organization
- Share practical, actionable guidance — not hype
- Surface emerging patterns and experiments for awareness
Building Momentum
- Actively collect real AI experiences and wins from across the org
- Amplify stories of teams using AI effectively to inspire others
- Make progress visible — turning individual experiments into shared proof points
- Help the org see itself as capable, not just aspirational
Operating Principles: Lead by Doing · Curate, Don't Overwhelm · Enable, Not Gatekeep · Stay Pragmatic · Continuously Learn
| Domain / Area | KPI Focus | Q2 Baseline Target | Q3 Progress Target | Q4 Outcome Target |
|---|---|---|---|---|
| AI Product Experiences | OKRs & Engagement | Scout-as-a-Service complete; AI Starter Kit v1 shipped; design paradigm established | All squads capable of building AI features; OKRs tracking positive trend | Full rollout; OKR outcomes reported |
| AI-Powered PDLC | Cycle Time & Defect Rate | Baseline metrics captured across PDLC phases; pilot squad(s) active | All squads expanded; measurable cycle time improvements tracked | AI-augmented PDLC standardized across all squads; full-year results reported |
| Agentic Development | Pod Velocity, Quality & ROI | Q2 pilots live (Quotes Squad + AlphaSpace Pod + Mobile Pod); success metrics instrumented | 3 pods active; agentic playbook in progress; starter pack distributed | 5 pods active; Agentic Playbook v1.0 published; quantified ROI reported; 2027 roadmap defined |
| Everyday AI | Adoption Rate & Time Saved | Core tool stack defined; use case library published; usage baselines established | 70%+ weekly active users; use case library expanded | 15–20% routine task time reduction; tool stack optimized |
| AI Evangelists | Newsletter Reach & Org Engagement | Newsletter launched; consistent weekly cadence established | Readership growing; content reflecting real org experiments and learnings | Sustained cadence; recognized as a go-to source for AI signal across the org |
| Risk | Level | Mitigation |
|---|---|---|
| Incorrect or hallucinated AI outputs | High | Human validation checkpoints at all decision-critical steps; continuous evaluation loop with automated accuracy scoring in place; mitigate at the source by continuously improving the quality of context provided to models |
| Data leakage or privacy violation | High | Access controls, data classification, DLP policies, and governance framework in place before any tool deployment or pilot expansion |
| Low adoption / change resistance | High | AI champions per function; executive sponsorship and visible modeling; emphasis on discoverability of self-serve resources and opportunities rather than mandated programs |
| Over-automation / loss of human judgment | Medium | Human-in-the-loop principle enforced by design; escalation paths defined for all agentic workflows |
| Code quality degradation from agents | Medium | Mandatory automated test coverage; AI-generated code reviewed in CI/CD pipeline before merge; addressed in agentic playbook |
| Tool sprawl / shadow AI usage | Medium | Core AI tool stack defined and communicated; acceptable use policy enforced; AI Evangelists provide clear guidance on approved tools |
| Agentic pod learnings not captured or shared | Medium | Success metrics instrumented before pilots begin; starter pack and living agentic playbook are explicit Q3 deliverables; AI Evangelists responsible for surfacing learnings org-wide |
| Pace exceeding governance readiness | Low | Governance framework established before any pilot expansion; agentic work gated by readiness scores, not timeline pressure |