Commit b332631

Merge pull request #8 from akashtalole/claude/add-claude-documentation-igODC

Add CLAUDE.md, 30-day blog plan, Days 1–5, and GitHub Pages config fixes

2 parents 9b9fd45 + 4e0df12

27 files changed: 3,515 additions, 1 deletion

File tree

_config.yml (1 addition, 1 deletion)
@@ -130,7 +130,7 @@ pwa:
 # ---------------------------------------------------------------------------
 # Pagination & URLs
 # ---------------------------------------------------------------------------
-future: true   # Always render posts regardless of build-time vs. post date
+future: false  # Only publish posts whose date is on or before the build date
 paginate: 10

 # ---------------------------------------------------------------------------
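The effect of this one-line change is worth spelling out: with `future: false`, Jekyll excludes any post whose front-matter `date` is later than the build time. A scheduled post in this commit, such as:

```yaml
---
layout: post
title: "Choosing Your AI Toolchain — Claude Code, Copilot, or Copilot Studio?"
date: 2026-04-14   # skipped by any build that runs before this date
---
```

will not appear on the published site until a build runs on or after that date. To preview scheduled posts locally without editing `_config.yml`, Jekyll's `--future` flag (`jekyll serve --future`) overrides the setting for that run.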
New file (130 additions, 0 deletions)
---
layout: post
title: "Choosing Your AI Toolchain — Claude Code, Copilot, or Copilot Studio?"
date: 2026-04-14
categories: [ai, meta]
tags: [claude-code, github-copilot, copilot-studio, agentic-ai, sdlc, coding-agents]
description: "Claude Code, GitHub Copilot, and Microsoft Copilot Studio are not competitors. They solve different problems at different layers. Here's the mental model I use to decide which to reach for — and when to use all three."
author: akashtalole
---

Every time I mention using multiple AI tools at work, someone asks the same question: "Which one should I actually use?"

The framing is wrong. These tools aren't alternatives — they're layers. Choosing between them is like asking whether you should use an IDE, a CI pipeline, or a cloud platform. The answer is yes, for different things, at different times.

But the layers aren't obvious until you've spent real time with all three. So let me break down how I actually think about this.

---

## The Three Tools and What They're Actually For

### GitHub Copilot — Your In-Editor Coding Companion

Copilot lives where you write code. It's inline, it's fast, and it's designed to keep you in flow. When you're writing a function and you know what you want but don't want to type all of it — that's Copilot. When you're in a file and want to ask a quick question about the code in front of you without switching context — that's Copilot Chat.

The key characteristic: **it works with what's in your editor right now**. It sees your current file, your open tabs, your recent edits. Its context window is your immediate working surface.

Best for:

- Autocomplete and inline code generation as you type
- Quick questions about the code you're looking at
- Generating test stubs, docstrings, boilerplate
- Small refactors within a file or a few files
- Staying in flow without context-switching to a browser

The limitation: depth. Copilot is excellent for the thing you're working on right now. It's not the right tool for "help me understand how this entire service fits together" or "let's redesign this module from scratch."

### Claude Code — Your Deep Technical Collaborator

Claude Code is a different kind of interaction. It's not inline — you have a conversation. You can give it complex multi-part problems, large amounts of context, and ask it to reason across multiple files and concerns at once.

The key characteristic: **it works best when the problem requires sustained reasoning**. Understanding a legacy codebase, planning a refactor, working through a design decision, debugging something with multiple possible causes — these are Claude Code's home territory.

Best for:

- Exploring and understanding unfamiliar codebases
- Complex multi-step coding tasks
- Code review and analysis across multiple files
- Architecture and design discussions
- Long-running pair programming sessions where you want to build context over time
- Enterprise security and compliance contexts where you need auditability

The limitation: it's a deliberate conversation, not a background assistant. You have to context-switch to use it. For quick in-editor completions, it's slower than Copilot.

### Microsoft Copilot Studio — Your Agent Builder

Copilot Studio is different from both. You're not using it to write code — you're using it to build AI-powered products and workflows. It's a platform for creating agents that end users or enterprise systems interact with.

The key characteristic: **it's for building, not using**. When I need an agent that handles customer queries, routes between specialist agents, connects to enterprise data sources, and triggers business workflows — that's a Copilot Studio project. The output is a deployed agent, not code I write.

Best for:

- Building conversational AI for end users
- Creating multi-agent orchestration across enterprise systems
- Connecting AI to line-of-business data (SharePoint, Dataverse, CRMs)
- Low-code/pro-code hybrid agent development
- Deploying agents into Microsoft 365 and Teams

The limitation: it's a platform with its own opinions. If your use case doesn't fit Microsoft's ecosystem or you need deep custom logic, you'll quickly hit the edges of what it can do. It's powerful within its lane, less flexible outside it.

---

## How They Work Together

Here's a real example from my current work to make this concrete.

I'm building a multi-agent solution for enterprise customer support using Copilot Studio. The orchestrator agent routes queries to specialist agents — one for order management, one for account queries, one for escalation.

Here's how all three tools show up in that project:

**Claude Code** — I use it to design the agent architecture. When I'm deciding how to structure the handoff between agents, what context each agent needs, how to handle failure cases — I'm having that conversation in Claude Code. I also use it to understand existing API documentation and write the custom connector code.

**GitHub Copilot** — When I'm writing the actual connector code or the Power Automate flows in VS Code, Copilot is handling the in-editor generation. The boilerplate, the JSON structures, the method signatures — Copilot completes those as I type.

**Copilot Studio** — The agents themselves, the topics, the orchestration, the connections to enterprise data — that all lives in Copilot Studio. It's where the product gets built and deployed.

Three tools, one project, no overlap in what each one does.
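To make the orchestrator's job concrete, here is a minimal sketch of the routing idea in plain Python. Everything in it — agent names, keyword rules — is hypothetical; in Copilot Studio this logic lives in topics and agent handoffs, not hand-written code.

```python
# Hypothetical sketch of the routing layer described above. The specialist
# names and keyword rules are illustrative only; Copilot Studio expresses
# this via topic trigger phrases and generative orchestration.

SPECIALISTS = {
    "orders": ["order", "shipping", "refund", "delivery"],
    "accounts": ["password", "login", "billing", "subscription"],
}

def route(query: str) -> str:
    """Pick a specialist agent for a customer query; escalate on no match."""
    text = query.lower()
    for agent, keywords in SPECIALISTS.items():
        if any(k in text for k in keywords):
            return agent
    # Nothing matched: hand off to the escalation (human-facing) agent
    return "escalation"
```

The real orchestrator also carries conversation context across the handoff, which is exactly the design question I work through in Claude Code before building anything in Studio.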
---

## The Decision Framework

When I'm about to reach for an AI tool, I ask these questions:

**1. Am I writing code right now in my editor?**
→ Yes: Copilot

**2. Do I need to reason deeply about a complex problem — design, architecture, understanding, review?**
→ Yes: Claude Code

**3. Am I building something an end user or enterprise system will interact with?**
→ Yes: Copilot Studio

**4. Is this a new kind of problem I haven't solved before?**
→ Start with Claude Code to think it through, then move to Copilot for implementation

**5. Do I need to connect AI to enterprise data and workflows without building everything from scratch?**
→ Copilot Studio, augmented with Claude Code for the custom logic

The short version: **Copilot for flow, Claude Code for depth, Copilot Studio for products.**
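If it helps to see the framework compressed, here is a toy encoding of those questions as a priority order — a sketch of the mental model, not a real tool, and the situation labels are my own shorthand:

```python
# Toy encoding of the decision framework above. The three booleans are
# illustrative shorthand for the questions in the post; the priority order
# is the point: products > depth > flow, with depth as the default for
# unfamiliar problems.

def pick_tool(in_editor: bool, needs_deep_reasoning: bool,
              building_user_facing_agent: bool) -> str:
    """Map a working context to the tool layer suggested by the framework."""
    if building_user_facing_agent:
        return "Copilot Studio"   # products: deployed agents, enterprise data
    if needs_deep_reasoning:
        return "Claude Code"      # depth: design, review, exploration
    if in_editor:
        return "GitHub Copilot"   # flow: inline completion, quick questions
    return "Claude Code"          # new problem: think it through first
```

The fallthrough to Claude Code mirrors question 4: when the problem is new, think before typing.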
---

## The Mistake I See Most Often

Engineers pick one tool and try to make it do everything.

They use only Copilot and wonder why it can't help with complex architectural decisions. Or they use only Claude Code and find themselves context-switching for every small completion. Or they look at Copilot Studio and try to use it as a code editor.

Each tool is well-designed for its layer. The skill is recognising which layer you're working at — and reaching for the right tool.

That gets easier with practice. After a few weeks of using all three intentionally, the decision becomes automatic. You stop thinking about the tools and start thinking about the problem.

---
## Where This Series Goes From Here

The next four weeks of posts go deep on each tool in turn. Arc 2 starts tomorrow with Claude Code — enterprise setup, real use cases, and what it takes to roll it out responsibly to a team.

If you've only been using one of these tools, I'd encourage you to try the others with deliberate intent this week. Not to evaluate them against each other — to find the layer where each one fits naturally.

---

*Day 5 of the [30-Day AI Engineering series](/posts/30-day-ai-engineering-blog-plan/). Previous: [Agent Skills 101](/posts/agent-skills-101-building-blocks-of-useful-ai-agents/).*
New file (100 additions, 0 deletions)
---
layout: post
title: "11 Years In, AI-Augmented — How My Workflow Actually Changed"
date: 2026-04-15
categories: [ai, meta]
tags: [claude-code, github-copilot, sdlc, ai-in-sdlc, agentic-ai]
description: "An honest before-and-after. Eleven years of engineering, eighteen months of AI tooling — here's what actually changed in how I plan, code, review, and ship. And what didn't."
author: akashtalole
---

I want to close Arc 1 of this series with something personal: an honest account of what actually changed in my day-to-day work when I started using AI tools seriously. Not the marketing version. The real version — including the parts that didn't change as much as I expected.

---

## Before I Had These Tools

Let me paint the picture accurately. Before I integrated AI tools into my workflow, I was already a reasonably efficient senior engineer. I had patterns that worked. I knew how to read a codebase, design a system, debug a hard problem. Eleven years of that builds real muscle memory.

The friction wasn't incompetence — it was the unavoidable tax on certain kinds of work:

- Understanding an unfamiliar codebase took days of reading, running, and dead ends
- Writing documentation felt like paying a tax with no immediate return
- The gap between "I have an idea" and "I have running code to validate it" involved a lot of typing I wasn't thinking about
- Code review caught what reviewers had time to check, not everything that should be checked

These weren't problems I was failing at. They were just the cost of doing the work.

---

## What Changed: The Honest Accounting

### Planning and Exploration — Significantly Faster

This is where I see the biggest delta. Before, when I picked up an unfamiliar piece of work — a new codebase, a new API, a problem domain I hadn't worked in — I'd spend a lot of time just orienting. Reading docs, running code, building a mental map.

Now I use Claude Code as an interactive guide through that orienting phase. I paste in code, ask questions, test hypotheses. The process is still the same — read, run, understand — but the iteration speed is dramatically higher.

What used to take a day often takes a morning. That compounds.

### Writing Code — Incrementally Better, Not Transformatively

This is where people expect the most change and often experience the most disappointment if they're already strong engineers.

Copilot makes the typing faster. It's good at boilerplate, repetitive patterns, filling out function signatures I already know the shape of. For a senior engineer who types fast and knows what they want, the acceleration is real but not enormous.

Where I feel it more is in unfamiliar territory — a language I use occasionally, a library I haven't worked with recently, a framework pattern I need to look up. The cost of those situations dropped significantly. I reach for documentation less and stay in the editor more.

### Code Review — Meaningfully Better

I now run AI pre-review on my own code before submitting PRs and on others' code before starting detailed review. It catches enough low-hanging fruit — inconsistent error handling, obvious edge cases, style issues, missing validations — that human review time is genuinely more focused.

My PRs are tighter before they go up. Other people's PRs take less of my attention on the mechanical stuff, so I can focus on the architecture and logic questions that actually need a human.

### Documentation — The Biggest Relative Improvement

I wrote documentation before. Not as much as I should have. The effort-to-reward ratio felt wrong, and I'd often defer it.

Now I write significantly more documentation than I did two years ago. Not because I suddenly care more — because the cost is low enough that I actually do it. Docstrings, ADRs, README sections, runbook entries — I draft them in seconds, edit them in minutes. The activation energy is gone.

That's a bigger change than it sounds. Documentation debt is one of the most consistently painful things in software teams, and it's almost entirely an incentive problem, not a knowledge problem.

### Debugging — Mixed Results

This is the one that surprised me. I expected AI to be a debugging superpower. It's more nuanced than that.

For certain classes of bugs — logic errors with clear symptoms, type errors, missing null checks — Claude Code is excellent. Paste the error, paste the relevant code, get a likely cause.

For the hard bugs — the ones that involve subtle interactions between distributed systems, race conditions, environment-specific behaviour, business logic that's wrong in a way that's hard to describe — AI is helpful as a thinking partner but rarely diagnoses the root cause directly. You still need to do the detective work.

### Architecture and Design — Mostly Unchanged

My process for designing systems is largely the same as it was before. AI is useful for quickly generating options and poking holes in reasoning, but the hard work of understanding tradeoffs in context — your team, your constraints, your existing system — is still human work.

I use Claude Code to pressure-test ideas and generate alternatives I might not have considered. I don't use it to make the decisions.

---

## What Didn't Change

**Judgment.** Knowing what the right problem is, whether a proposed solution is actually right for the context, what matters and what doesn't — none of that came from AI.

**Understanding the business.** The context that lives in people's heads, in meetings, in three years of history — AI doesn't have it and can't substitute for building it.

**The hard conversations.** Technical leadership, navigating disagreements, deciding what to build and what to cut — still entirely human.

**Debugging the truly hard things.** Concurrency bugs, distributed system failures, environment mysteries — still detective work.

---

## The Honest Summary

The AI tools I use daily have made me faster at a meaningful subset of my work. Not at all of it. The parts that got faster are real, and the compounding effect over months is significant.

But the things that separate a good senior engineer from an average one — judgment, understanding, communication, system thinking — those are unchanged. If anything, they matter more now, because the baseline productivity has risen and the differentiator has shifted further toward the things AI doesn't do.

That's the honest picture after eighteen months. Starting tomorrow, Arc 2 gets specific: Claude Code for enterprise teams.

---

*Day 6 of the [30-Day AI Engineering series](/posts/30-day-ai-engineering-blog-plan/). Previous: [Choosing Your AI Toolchain](/posts/choosing-your-ai-toolchain-claude-code-copilot-or-copilot-studio/).*
