Developer’s Guide: The State of AI Global Survey 2025
This is a developer-focused personal analysis of McKinsey’s November 2025 “The State of AI” survey.
⚠️ DISCLAIMER
This report was generated with Claude Code analyzing McKinsey's November 2025 State of AI survey. I created it for myself and decided it was worth sharing. This is NOT an official McKinsey publication; it is my personal analysis, created by working with AI to extract developer-relevant insights from their research. This is my personal research and learning method, especially for big reports like this one, and I thought the insights I captured were worth posting on my blog.
All McKinsey data is clearly marked with 📊. Everything else is interpretation, context, and technical translation.
What This Is
This is a developer-focused analysis of McKinsey’s November 2025 “State of AI” survey report. Instead of wading through 52 pages of business-speak, I used Claude Code to extract what actually matters for software engineers and translate McKinsey’s findings into actionable technical insights.
The big question I asked: “From a developer’s perspective, what does McKinsey’s State of AI 2025 report actually mean for my career and daily work?”
What you’ll find here:
📊 McKinsey’s data (1,993 respondents, 105 countries) on AI adoption, scaling challenges, and what separates high performers
Technical translations of business concepts into engineering reality
Real-world context from 2025 job market data, Big Tech layoffs, and actual implementation patterns
Actionable guidance on what skills to learn, what questions to ask, and how to position yourself
Not interested in the developer angle? Read the Executive Summary for the business leadership perspective on the same report.
How This Was Created
I loaded all seven McKinsey State of AI reports (2020-2025) into Claude Code and analyzed them with my PDF reader MCP server to identify trends, shifts, and patterns over time. This isn't just about the November 2025 report; it's about understanding how McKinsey's perspective on AI has evolved and what that means for developers.
Questions I asked Claude Code:
“How has the definition of ‘high performers’ changed from 2021 to 2025?”
“If AI will create more job opportunities in tech, why are there Big Tech layoffs?”
“What trends changed between 2023, 2024, and 2025?”
The result is this analysis—combining McKinsey’s multi-year survey research with technical context, market data, trend analysis, and engineering best practices.
Visual markers in this document:
📊 = McKinsey data (directly from their November 2025 report)
Paragraphs with orange lines = External context (Claude’s analysis, web searches, technical interpretation, real-world examples)
📊 McKinsey’s Current Position (November 2025)
A paragraph from the general analysis document I generated.
The Realistic Assessment:
📊 From McKinsey’s November 2025 report:
“Most organizations are still navigating the transition from experimentation to scaled deployment.”
“While AI tools are now commonplace, most organizations have not yet embedded them deeply enough into their workflows and processes to realize material enterprise-level benefits.”
“The transition from pilots to scaled impact [remains] a work in progress at most organizations.”
The Path Forward:
📊 McKinsey’s recommended practices:
Think transformatively, not incrementally
Redesign workflows fundamentally
Pursue innovation and growth, not just efficiency
Invest heavily and track ROI rigorously
Ensure C-suite ownership and commitment
Follow comprehensive best practices
Build organizational capabilities, not just deploy technology
The Ultimate Message:
📊 From McKinsey’s November 2025 report:
“As AI tools, including agents, improve and companies’ capabilities mature, the opportunity to embed AI more fully into the enterprise will offer organizations new ways to capture value and create competitive advantage.”
📊 Synthesis: The journey from 2023’s “breakout year” to 2025’s “agents, innovation, and transformation” reflects McKinsey’s view evolving from technological excitement to organizational realism: recognizing that AI’s promise remains ahead, but achieving it requires fundamental business transformation, not just technology adoption.
TL;DR for Developers
The Bottom Line: Your company is probably using AI (88% are), but they’re likely stuck in pilot hell (68% haven’t scaled). The ones succeeding aren’t just adding AI to existing code—they’re fundamentally redesigning workflows and building with AI agents. This is a systems architecture challenge, not just an API integration problem.
Three Critical Insights:
AI Agents are the new frontier (62% experimenting, 23% scaling)
Workflow redesign > Tool adoption (2.8x gap between high/low performers)
Your job is changing, not disappearing (IT/dev functions seeing headcount increases)
1. AI Agents: The Technical Shift You Need to Understand
What McKinsey Defines as “AI Agents”
📊 McKinsey’s Definition: “AI agents are systems based on foundation models that can act in the real world. Unlike a gen AI chatbot or copilot, which is largely reactive, an agentic solution can plan and execute multiple steps in a workflow.”
External context: Claude’s analysis and web sources, not McKinsey data.
Translation for Developers:
| Traditional Gen AI (2023-2024) | AI Agents (2025+) |
| --- | --- |
| User prompt → AI response | Goal → Multi-step planning |
| Single-turn interaction | Multi-turn autonomous execution |
| No state persistence | State management & context retention |
| Human-driven workflow | AI-driven workflow orchestration |
| Example: ChatGPT, Copilot prompts | Example: AutoGPT, agent frameworks |
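The contrast above can be sketched in a few lines of Python. This is an illustrative skeleton only, not a real framework: `plan_steps` and `execute_step` are hypothetical stand-ins for the model and tool calls an actual agent would make.

```python
# Illustrative agent skeleton: a goal is decomposed into steps, and each
# step's result is kept as state that persists across turns.

def plan_steps(goal: str) -> list[str]:
    # Stand-in for an LLM call that returns a plan.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute_step(step: str, state: dict) -> str:
    # Stand-in for a tool invocation or follow-up model call.
    return f"done({step})"

def run_agent(goal: str) -> dict:
    state: dict = {"goal": goal, "results": []}
    for step in plan_steps(goal):           # multi-step planning
        result = execute_step(step, state)  # autonomous execution
        state["results"].append(result)     # state retention between steps
    return state

state = run_agent("summarize Q3 incidents")
```

Compare this with a single-turn chatbot call: one prompt, one response, no retained state.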
Adoption Reality Check
📊 Current State (McKinsey Nov 2025):
62% of organizations are experimenting with or piloting AI agents
23% are scaling agents somewhere in their enterprise
Industries leading: Technology (24%), Media/Telecom (21%), Healthcare (18%)
Where Agents Are Being Deployed:
IT and knowledge management (most common)
Service operations
Software engineering ← You’re here
Product development
The Reality vs. The Hype
📊 McKinsey’s Realistic Assessment (Michael Chui):
“When it comes to agents, it takes hard work to do it well.”
External context: Claude’s analysis and web sources, not McKinsey data.
What This Means for You:
Building production-ready agents is NOT just using LangChain or AutoGPT
You need robust error handling, fallback mechanisms, and human oversight
Most implementations are still exploratory (not production-critical)
The technical challenges are real: state management, reliability, cost control
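A minimal sketch of the “robust error handling and fallback mechanisms” point, assuming a hypothetical `call_agent_step` that fails transiently on its first attempts; real agents would add exponential backoff and structured escalation.

```python
def call_agent_step(step: str, attempt: int) -> str:
    # Hypothetical model/tool call that fails on its first two attempts.
    if attempt < 2:
        raise TimeoutError("model timeout")
    return f"ok:{step}"

def run_step_with_fallback(step: str, max_retries: int = 3) -> str:
    """Retry transient failures; escalate to a human when retries run out."""
    for attempt in range(max_retries):
        try:
            return call_agent_step(step, attempt)
        except TimeoutError:
            continue  # in production: log, back off, then retry
    return f"escalated_to_human:{step}"  # explicit fallback, not silent failure
```

Here `run_step_with_fallback("classify")` succeeds on the third attempt; with `max_retries=2` it escalates to a human instead of failing silently.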
2. The Scaling Gap: Why Most AI Projects Fail to Deploy
Where Organizations Are Stuck
📊 McKinsey Data (Nov 2025):
Experimentation: 31% ← Still testing, PoC phase
Piloting: 30% ← Limited production, single team/use case
Scaling: 25% ← Multiple teams, cross-functional deployment
Fully Scaled: 7% ← Enterprise-wide, integrated into core systems
────────────────────
STUCK (not scaling): 68%
External context: Claude’s analysis and web sources, not McKinsey data.
Developer Translation:
Experimenting = Jupyter notebooks, side projects, hackathons
Piloting = One team using it, hardcoded configs, manual processes
Scaling = Multi-team adoption, CI/CD integration, monitoring
Fully Scaled = Platform-level integration, automated ops, org-wide access
Why Technical Teams Get Stuck
📊 Why Companies Get Stuck (from McKinsey):
Incremental thinking: Use-case-by-use-case approach creates technical debt
Efficiency-only objectives: Cost focus limits organizational energy
No workflow redesign: Adding AI to broken processes doesn’t transform outcomes
IT ownership: Delegating to IT instead of a CEO-led transformation
Incomplete execution: Following some best practices, but not all 6 dimensions
External context: Claude’s analysis and web sources, not McKinsey data.
Technical Translation:
What this means for developers and technical teams:
“Incremental thinking creates technical debt” translates to:
Adding AI to existing systems without redesigning architecture
Building one-off solutions instead of reusable platforms
Result: Fragile integrations, maintenance nightmares
“No workflow redesign” translates to:
Just adding AI endpoints to existing code
Not rethinking the entire system architecture
Missing opportunities for AI-native design patterns
“IT ownership without business context” translates to:
Missing MLOps pipelines and AI infrastructure
No monitoring/observability for AI systems
Building what’s asked, not what drives business value
Disconnect between technical capability and impact
The High Performer Difference (Technical Practices)
📊 What the top 6% do differently:
| Practice | High Performers | Others | Gap |
| --- | --- | --- | --- |
| Technology infrastructure allowing latest tech implementation | 60% | 22% | 2.7x |
| Iterative solution development with established improvement processes | 54% | 23% | 2.3x |
| Human-in-the-loop processes clearly defined | 65% | 24% | 2.7x |
| Workflow redesign embedding AI into business processes | 58% | 20% | 2.9x |
External context: Claude’s analysis and web sources, not McKinsey data.
Translation:
They build platforms, not one-off solutions
They have CI/CD for AI models, not manual deployments
They define when humans validate outputs, not ad-hoc checking
They redesign the system, not just add AI endpoints
3. Workflow Redesign: The #1 Technical Success Factor
What McKinsey Found
📊 Key Finding:
“Out of 31 variables tested, workflow redesign has one of the strongest contributions to achieving meaningful business impact.”
Statistics:
Only 21% of all organizations have fundamentally redesigned workflows
55% of high performers redesigned workflows vs 20% of others (2.8x gap)
This is the highest correlation with EBIT impact across all factors tested
What “Fundamental Redesign” Means for Developers
External context: Claude’s analysis and web sources, not McKinsey data.
❌ NOT Workflow Redesign (Adding AI to existing process):

```python
# Before: Manual customer support ticket handling
def handle_ticket(ticket):
    assign_to_human(ticket)
    human_resolves_ticket(ticket)

# After: Adding AI to existing workflow
def handle_ticket(ticket):
    ai_suggests_response(ticket)   # ← Just added AI
    assign_to_human(ticket)        # ← Same old process
    human_resolves_ticket(ticket)
```

✅ Workflow Redesign (Rearchitecting around AI capabilities):

```python
# Redesigned: AI-first with human oversight
def handle_ticket(ticket):
    # AI handles entire workflow
    severity = ai_classify_severity(ticket)
    if severity == "low":
        # AI resolves autonomously
        response = ai_generate_resolution(ticket)
        ai_send_response(response)
        human_review_sample(response, probability=0.1)
    elif severity == "medium":
        # AI drafts, human approves
        draft = ai_generate_resolution(ticket)
        human_approves_and_sends(draft)
    else:  # high severity
        # Human-led with AI assistance
        context = ai_gather_context(ticket)
        assign_to_specialist(ticket, context)
        ai_monitor_and_suggest(ticket)
```

The Architectural Difference:
Before: Linear process, AI as optional helper
After: Branching logic based on AI capabilities, human-in-the-loop as checkpoints
Real-World Implications for Your Architecture
What changes:
Role transformation:
Developers: From building features → Building AI-enabled systems
Users: From doers → Overseers/validators
New system requirements:
Confidence scoring for AI outputs
Audit trails for AI decisions
Fallback mechanisms when AI fails
Human escalation paths
Feedback loops for model improvement
Infrastructure needs:
Real-time model inference at scale
A/B testing infrastructure for AI variants
Monitoring for AI-specific failures (hallucinations, bias, drift)
Cost tracking per AI call (tokens, compute)
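To make the last bullet concrete, here is a minimal per-call cost tracker. The per-1K-token prices are made-up placeholders; substitute your provider's actual rates.

```python
from dataclasses import dataclass, field

# Placeholder prices per 1,000 tokens; NOT real provider rates.
PRICE_PER_1K = {"prompt": 0.003, "completion": 0.015}

@dataclass
class CostTracker:
    calls: list = field(default_factory=list)

    def record(self, prompt_tokens: int, completion_tokens: int) -> float:
        # Cost of one model call, split by prompt vs completion tokens.
        cost = ((prompt_tokens / 1000) * PRICE_PER_1K["prompt"]
                + (completion_tokens / 1000) * PRICE_PER_1K["completion"])
        self.calls.append(cost)
        return cost

    @property
    def total(self) -> float:
        return sum(self.calls)

tracker = CostTracker()
tracker.record(1000, 1000)
tracker.record(2000, 500)
```

Per-call tracking like this is what lets you alert on a runaway agent loop before the bill arrives.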
4. Your Job: Changing, Not Disappearing
The Nuanced Reality
📊 McKinsey’s Workforce Data (Nov 2025):
32% expect workforce decreases of 3%+ in the next year
43% expect little to no change
13% expect increases of 3%+
Function-Level Breakdown:
🟢 Software Engineering/Dev → Headcount likely to INCREASE
🟢 IT → Headcount likely to INCREASE
🟢 Product/Service Development → Headcount likely to INCREASE
🔴 Service Operations → Headcount likely to decrease
🔴 Supply Chain/Inventory → Headcount likely to decrease
📊 McKinsey Quote (Lareina Yee):
“Even in these early days of adoption, we are seeing changes in the skills demanded for a range of jobs.”
The Big Tech Reality: What’s Actually Happening in November 2025
External context: Claude’s analysis and web sources, not McKinsey data.
While McKinsey’s survey data shows IT/dev functions expecting headcount increases overall, the 2025 reality for Big Tech companies is more complex:
Big Tech Layoffs (2025 Data):
178,635 tech workers laid off in 2025 across 606 layoff events
An average of 627 tech workers lost their jobs every day amid AI-driven restructuring
Major cuts: Amazon (14K), Microsoft (9K), Intel (25K), IBM (8K), Salesforce (2.5-5K)
Over 17,000 jobs explicitly attributed to AI, another 20,000 to automation
BUT: Tech Jobs ARE Migrating to Non-Tech Industries:
Large non-tech companies are absorbing tech talent:
Walmart: +5,000 tech workers hired in 2025
JP Morgan Chase: 55,000 technology employees total
United Health: +10,000 tech workers over past decade
Goldman Sachs, Citizens Financial: Active hiring sprees
What This Means for You:
Big Tech (FAANG) is contracting and using “AI efficiency” as rationale
Large traditional companies (finance, retail, healthcare) are hiring tech talent
Tech jobs spreading from Big Tech to non-tech Fortune 500 companies
Small/mid companies (<$500M revenue) still face talent constraints
Key Insight: McKinsey’s data showing “larger companies hiring AI talent at 2x rate” refers to large NON-TECH companies, not Big Tech. The democratization of tech jobs is real, but those jobs are migrating to traditional industries becoming software-enabled, not to startups.
What This Means for Software Engineers
External context: Claude’s analysis and web sources, not McKinsey data.
Your role is evolving, and where you work may change too.
Old Job Description (2023):
Write code to implement features
Debug and fix issues
Deploy applications
Maintain systems
New Job Description (2025+):
Design AI-enabled systems architecture
Define human-AI collaboration patterns
Build AI observability/monitoring
Implement safety guardrails
Optimize AI workflows for cost & performance
Use AI to augment your own productivity (Copilot, etc.)
Where to Look for Opportunities
External context: Claude’s analysis and web sources, not McKinsey data.
Based on 2025 hiring trends, consider these sectors:
High Growth Sectors for Tech Talent:
Financial Services - JP Morgan (55K tech employees), Goldman Sachs, Citizens Financial
Retail/E-commerce - Walmart (+5K in 2025), Target, other large retailers
Healthcare - United Health (+10K over decade), insurance companies, health tech
Traditional Enterprise - Fortune 500 companies building software capabilities
Lower Growth/Higher Risk:
Big Tech (FAANG) - Significant layoffs despite selective AI hiring
Startups (<$500M revenue) - Resource constraints, limited AI hiring
Education Pipeline Shift:
Cornell CS graduates going to finance increased from 16% → 22% (since 2022)
Carnegie Mellon (Heinz College): Finance placements rose from 16% → 19%
Students choosing finance/healthcare/retail over Big Tech for stability
New Roles Emerging (High Demand)
📊 From McKinsey March 2025 Report:
“Respondents at larger companies are more likely than their peers at smaller organizations
to report hiring a broad range of AI-related roles, with the largest gaps seen in hiring:
• AI data scientists
• Machine learning engineers
• Data engineers”
External context: Claude’s analysis and web sources, not McKinsey data.
Translation: These roles are in HIGH demand:
ML Engineers - Building and deploying models
Data Engineers - Building pipelines for AI training data
MLOps Specialists - CI/CD for AI systems
AI Product Managers - Defining AI-enabled products
AI Safety/Compliance Engineers - Ensuring responsible AI use
Reality Check on “AI replacing developers”:
Yale Budget Lab research: Only 1% of service firms reported AI as reason for layoffs (down from 10% in 2024)
AI may be a convenient excuse rather than primary driver of tech layoffs
Amazon CEO admitted layoffs were “not even really AI driven”
Real driver: ~$1 trillion in AI infrastructure spending forcing cost-cutting elsewhere
External context: Claude’s analysis and web sources, not McKinsey data.
Skills to Develop Now
Technical Skills:
Understanding of LLM APIs and prompt engineering
Agent frameworks (LangChain, AutoGPT, CrewAI)
Vector databases (Pinecone, Weaviate, Chroma)
Model evaluation and monitoring
Cost optimization for AI systems
System Design Skills:
Designing for human-in-the-loop
Building feedback mechanisms
Failure mode analysis for AI
State management for multi-step agents
Scalability patterns for AI workloads
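One bullet above, state management for multi-step agents, can be sketched as a checkpoint/resume pattern. Names are illustrative; a production system would persist state to a database or queue rather than a string.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AgentState:
    goal: str
    completed: list = field(default_factory=list)

    def checkpoint(self) -> str:
        # Serialize after each step so a crashed run can resume,
        # instead of re-running (and re-paying for) earlier steps.
        return json.dumps(asdict(self))

    @classmethod
    def resume(cls, raw: str) -> "AgentState":
        return cls(**json.loads(raw))

state = AgentState(goal="triage backlog")
state.completed.append("classify")
restored = AgentState.resume(state.checkpoint())
```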
Business Skills:
Understanding ROI of AI features
Identifying high-impact use cases
Communicating AI limitations to stakeholders
Balancing automation with human oversight
5. Investment & Resource Realities
The Resource Gap
📊 Digital Budget Allocation to AI (McKinsey Nov 2025):
| Metric | High Performers | Others | Gap |
| --- | --- | --- | --- |
| Spend >20% of digital budget on AI | 35% | 10% | 3.5x |
| Spend >11% of digital budget on AI | 55% | 25% | 2.2x |
| Spend ≤5% of digital budget on AI | 6% | 44% | 0.14x |
External context: Claude’s analysis and web sources, not McKinsey data.
What This Means for Your Team:
If your company is treating AI as a 5% side project, you’re in the “others” category. High performers are making AI a core budget priority.
Questions to Ask Your Leadership:
What % of our engineering budget is allocated to AI initiatives?
Are we building platforms or one-off solutions?
Do we have dedicated headcount for AI infrastructure?
What’s our 3-year AI roadmap?
Company Size Matters (A Lot)
📊 McKinsey Data: Large vs Small Companies
Large Companies ($5B+ revenue):
47% in scaling phase
2x more likely to hire specialized AI roles
Can afford comprehensive AI infrastructure
Have resources for dedicated AI teams
Small/Mid Companies (<$500M revenue):
29% in scaling phase
Limited specialized hiring
Must be scrappier with resources
Often rely on external expertise
External context: Claude’s analysis and web sources, not McKinsey data.
Implication for Developers:
At large companies: Expect specialized roles, bigger teams, more structure
At small/mid companies: Expect to wear multiple hats, use managed services, prioritize ruthlessly
6. Technical Risks You’ll Need to Handle
The Risk Landscape
📊 McKinsey Data (Nov 2025):
51% of organizations experienced at least one negative consequence from AI
Top consequences: Inaccuracy (30%), cybersecurity issues, IP infringement
Organizations are mitigating more: Average of 4 risks mitigated (up from 2 in 2022)
External context: Claude’s analysis and web sources, not McKinsey data.
What This Means for Your Code
You need to build for these failure modes:
Inaccuracy / Hallucinations (30% experienced this)
```python
# Don't just trust the output
response = llm.generate(prompt)

# Add verification layers
if is_factual_claim(response):
    verified = fact_check(response)
    if not verified:
        flag_for_human_review(response)
```
Cybersecurity Issues
Prompt injection attacks
Data leakage through model outputs
Unauthorized access via AI interfaces
Your responsibility:
Input sanitization for prompts
Output filtering for sensitive data
Access controls on AI endpoints
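A naive sketch of the first two responsibilities. The patterns below are illustrative only; real deployments need layered defenses (allowlists, structured outputs, provider guardrails), not regexes alone.

```python
import re

# Illustrative patterns only; attackers routinely evade simple blocklists.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{10,}", re.IGNORECASE)

def sanitize_input(user_text: str) -> str:
    # Reject obvious injection attempts before they reach the model.
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_text, re.IGNORECASE):
            raise ValueError("possible prompt injection")
    return user_text

def filter_output(model_text: str) -> str:
    # Redact API-key-shaped strings before output leaves the service.
    return SECRET_PATTERN.sub("[REDACTED]", model_text)
```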
IP Infringement
Models trained on copyrighted data
Outputs that reproduce training data
Your responsibility:
Document model training data sources
Implement plagiarism detection
Have legal review of model outputs (especially for public-facing features)
Human Oversight Patterns
📊 McKinsey Data:
27% review ALL gen AI outputs before use
27% review ≤20% of outputs
Industries with highest oversight: Business, legal, professional services
External context: Claude’s analysis and web sources, not McKinsey data.
Design Pattern: Confidence-Based Review
```python
def handle_ai_output(input_data):
    result = ai_model.predict(input_data)
    confidence = result.confidence_score

    if confidence > 0.95:
        # Auto-approve for high confidence
        return auto_execute(result)
    elif confidence > 0.70:
        # Sample review for medium confidence
        if random.random() < 0.20:  # 20% review rate
            return queue_for_review(result)
        return auto_execute(result)
    else:
        # Always review for low confidence
        return queue_for_review(result)
```
External context: Claude’s analysis and web sources, not McKinsey data.
7. The Path Forward: What to Do Monday Morning
Immediate Actions (This Week)
Audit your current AI usage:
What AI tools is your team using? (Copilot, ChatGPT, Claude?)
Are they sanctioned or shadow IT?
What % of your workflow includes AI?
Assess your scaling phase:
Experimenting? (Just testing, POCs)
Piloting? (One team, limited production)
Scaling? (Multiple teams, real users)
Fully scaled? (Integrated into core systems)
Identify one workflow to redesign:
Don’t just add AI to existing process
Ask: “If we built this from scratch with AI-first, what would it look like?”
Start small but think transformatively
Short-Term (This Quarter)
Skill Up:
Take a course on LLM APIs (OpenAI, Anthropic, local models)
Build a small agent that does multi-step tasks
Experiment with vector databases
Learn prompt engineering beyond basic chat
Propose Infrastructure Improvements:
Monitoring for AI costs (token usage)
A/B testing framework for AI features
Feedback collection mechanism
Human review workflow
Document Your AI Usage:
What models are you using?
What prompts/configurations?
What are the failure modes?
How do you handle errors?
Long-Term (This Year)
Position Yourself as AI-Native:
Be the person who understands both traditional software AND AI
Learn to explain AI capabilities/limitations to non-technical stakeholders
Contribute to your org’s AI strategy discussions
Build Platform Thinking:
Stop building one-off AI integrations
Design reusable components (prompt templates, agent frameworks, monitoring)
Create internal tools that let others leverage AI
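As one example of a reusable component, a versioned prompt template could look like the sketch below; the class, names, and fields are illustrative, not from any particular framework.

```python
import string

class PromptTemplate:
    """A named, versioned prompt defined once and reused across features."""

    def __init__(self, name: str, version: str, template: str):
        self.name, self.version = name, version
        self.template = string.Template(template)

    def render(self, **params) -> str:
        # substitute() raises KeyError on missing params, so bad calls fail fast.
        return self.template.substitute(**params)

summarize = PromptTemplate(
    name="ticket-summary",
    version="1.2",
    template="Summarize this support ticket in $max_words words:\n$ticket",
)
prompt = summarize.render(max_words=50, ticket="App crashes on login.")
```

Versioning templates this way also makes A/B testing and rollback of prompt changes tractable.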
Stay Ahead of the Curve:
Follow AI agent frameworks development
Track production AI case studies
Join communities (r/LocalLLaMA, AI engineering Slack groups)
Contribute to open source AI tools
External context: Claude’s analysis and web sources, not McKinsey data.
8. Critical Questions to Ask Your Organization
Strategy Questions
“What’s our AI vision beyond cost savings?”
📊 High performers set growth/innovation goals (80% vs 50%)
If your company only talks about efficiency, that’s a red flag
“Are we redesigning workflows or just adding AI to existing processes?”
📊 Workflow redesign is the #1 success factor
If you’re just wrapping AI around old processes, you’ll struggle to scale
“Who owns AI strategy—IT or the C-suite?”
📊 Sukharevsky: “Delegating to IT is a recipe for failure”
CEO-led initiatives are 3x more successful
Technical Questions
“Do we have MLOps infrastructure?”
CI/CD for models?
Model versioning?
Monitoring and alerting?
“What’s our human-in-the-loop policy?”
📊 65% of high performers have this clearly defined vs 24% of others
If undefined, you’re building on shaky ground
“How do we track AI costs and ROI?”
📊 High performers track well-defined KPIs (52% vs 13%)
Token costs can spiral quickly without tracking
Career Questions
“What AI skills are we hiring for?”
Are we building internal capabilities or outsourcing?
What’s the career path for AI-focused engineers?
“What % of engineering time is spent on AI projects?”
If <10%, AI is a side project
If >30%, AI is a strategic priority
9. Common Developer Misconceptions (Corrected)
Misconception #1: “AI will replace developers”
📊 Reality (McKinsey): IT and product development functions are increasing headcount, not decreasing.
External context: Claude’s analysis and web sources, not McKinsey data.
The 2025 Market Reality:
Big Tech laid off 178,635 workers in 2025 (often citing “AI efficiency”)
However, only 1% of firms actually report AI as layoff reason (Yale research)
Tech jobs ARE migrating: From Big Tech → Finance/Retail/Healthcare
Large non-tech companies (Walmart, JP Morgan, United Health) hiring thousands
What’s actually happening:
Junior developers using AI become more productive (less junior work needed)
Senior developers focus on system design, AI integration, oversight (more senior work needed)
New roles emerge (ML engineers, MLOps, AI safety)
Job market shifting: Big Tech contracting, traditional industries expanding tech teams
Misconception #2: “We just need to add ChatGPT API and we’re done”
📊 Reality (McKinsey): Only 32% of organizations are scaling AI despite 88% adoption.
External context: Claude’s analysis and web sources, not McKinsey data.
Why adding an API isn’t enough:
No monitoring/observability
No cost controls
No human oversight patterns
No workflow redesign
No feedback loops for improvement
Misconception #3: “AI agents will work autonomously right away”
📊 Reality (Michael Chui, McKinsey): “When it comes to agents, it takes hard work to do it well.”
External context: Claude’s analysis and web sources, not McKinsey data.
The hard parts:
Error handling when agents get stuck
State management across multi-step workflows
Cost control (agents can burn through API credits)
Defining when to escalate to humans
Building trust with users
Misconception #4: “Smaller companies will hire more AI engineers”
📊 Reality (McKinsey): Large companies are hiring AI talent at 2x the rate of smaller companies.
External context: Claude’s analysis and web sources, not McKinsey data.
Why:
Large companies have bigger budgets for specialized roles
They can afford comprehensive AI infrastructure
They’re further along in scaling (47% vs 29%)
Misconception #5: “High performers just move faster”
📊 Reality (McKinsey): High performers are 3.6x more ambitious, not just faster.
External context: Claude’s analysis and web sources, not McKinsey data.
Difference:
Others: Add AI to 10 existing processes (incremental)
High performers: Redesign the entire business model around AI (transformative)
10. The Developer’s Reality Check
What McKinsey’s Data Really Tells Us
The Harsh Truth:
68% of companies are stuck in pilots
61% see no enterprise-level EBIT impact
79% haven’t fundamentally redesigned workflows
Most AI projects fail to scale
But also:
The 6% who succeed follow clear patterns
Workflow redesign is THE differentiator (2.8x gap)
Developer/IT roles are growing, not shrinking
AI agents are the next frontier (62% experimenting)
External context: Claude’s analysis and web sources, not McKinsey data.
What This Means for Your Career
Short-term (1-2 years):
Learn AI tooling (APIs, agents, vector DBs)
Build AI-augmented features
Understand AI limitations and failure modes
Medium-term (3-5 years):
Master AI system design
Become expert in human-AI collaboration patterns
Lead AI infrastructure initiatives
Understand business impact (not just technical capability)
Long-term (5+ years):
Be the bridge between traditional software and AI-native systems
Design entirely new workflows around AI capabilities
Lead transformative (not incremental) AI initiatives
The Mindset Shift Required
❌ Old Developer Mindset:
“I write code that implements business logic.”

✅ New AI-Native Developer Mindset:
“I design systems where AI and humans collaborate, with clear fallback patterns, monitoring, and continuous improvement loops.”
Final Takeaway: Think Transformatively, Not Incrementally
📊 McKinsey’s Core Message (Alex Singla):
“It pays to think big. The organizations that are building a genuine and lasting competitive advantage from their AI efforts are the ones that are thinking in terms of wholesale transformative change that stands to alter their business models, cost structures, and revenue streams—rather than proceeding incrementally.”
External context: Claude’s analysis and web sources, not McKinsey data.
For Developers, This Means:
Don’t just:
Add AI to your existing code
Use Copilot to write boilerplate faster
Build one-off AI features
Instead:
Redesign your architecture around AI capabilities
Build platforms that enable AI-first workflows
Create systems where AI and humans collaborate effectively
Think about what’s possible if AI handles 80% of the work
The companies winning are not moving incrementally. Neither should you.
Report Citation:
McKinsey & Company (November 2025). “The state of AI in 2025: Agents, innovation, and transformation.” McKinsey Global Survey, 1,993 participants across 105 nations. Survey fielded June 25 - July 29, 2025.
Authors: Alex Singla, Alexander Sukharevsky, Lareina Yee, Michael Chui, Bryce Hall, Tara Balakrishnan (QuantumBlack, AI by McKinsey)
Explore All Analyses
Browse the full analysis folder: github.com/hancengiz/research_reports/tree/main/2-analysis
This folder contains all my ongoing conversations with Claude Code as I work to understand McKinsey’s reports and other industry research. New analyses are added as I explore different angles and questions.
Main Repository
Full repository: github.com/hancengiz/research_reports
Contains all source PDFs, text extractions, and the framework to analyze them yourself with Claude Code.


