Merge pull request #7 from akashtalole/claude/add-claude-documentation-igODC
Update _config.yml with proper GitHub Pages configuration
- Replace generic Chirpy starter placeholders with actual blog content
- Update tagline and description to reflect AI engineering focus
- Set url/baseurl correctly for GitHub Pages user site
- Remove placeholder email; keep social links (GitHub first as copyright owner)
- Add CLAUDE.md to exclude list so it is not processed by Jekyll
- Add comments to clarify every section for future edits
- Retain future: true and all correct technical settings
https://claude.ai/code/session_01Nfbjr497RyUJTqFxh2VM8d
The resulting `_config.yml` diff:

```diff
-tagline: Passion for Innovation # it will display as the subtitle
+tagline: Lead AI Engineer — Practical notes on Claude Code, GitHub Copilot, Agentic AI, and AI in the SDLC

-description: >- # used by seo meta and the atom feed
-  A passionate developer with a knack for innovation, dedicated to crafting elegant solutions and pushing the boundaries of technology. With a love for coding and a drive to create impactful projects, I thrive on turning ideas into reality and making a difference in the digital world.
+description: >-
+  Practical AI engineering from the trenches. Hands-on notes on Claude Code,
+  GitHub Copilot, Microsoft Copilot Studio, agentic AI, coding agents, agent
+  skills, and AI in the software development lifecycle — written by a Lead AI
+  Engineer with 11 years of industry experience.

-# Fill in the protocol & hostname for your site.
-# E.g. 'https://username.github.io', note that it does not end with a '/'.
+# GitHub Pages URL — no trailing slash
 url: "https://akashtalole.github.io"

+# Base URL — empty for user/org sites (username.github.io)
+# Set to /repo-name only for project sites
+baseurl: ""
+
 github:
-  username: akashtalole # change to your GitHub username
+  username: akashtalole

 twitter:
-  username: akashtalole # change to your Twitter username
+  username: akashtalole

 social:
-  # Change to your full name.
-  # It will be displayed as the default author of the posts and the copyright owner in the Footer
   name: Akash Talole
-  email: example@domain.com # change to your email address
-  fediverse_handle: # fill in your fediverse handle. E.g. "@username@domain.com"
+  email: ""
   links:
-    # The first element serves as the copyright owner's link
-    - https://twitter.com/akashtalole # change to your Twitter homepage
-    - https://github.com/akashtalole # change to your GitHub homepage
-    # Uncomment below to add more social links
-    # - https://www.facebook.com/username
-    # - https://www.linkedin.com/in/username

-# Site Verification Settings
-webmaster_verifications:
-  google: # fill in your Google verification code
-  bing: # fill in your Bing verification code
-  alexa: # fill in your Alexa verification code
-  yandex: # fill in your Yandex verification code
-  baidu: # fill in your Baidu verification code
-  facebook: # fill in your Facebook verification code
```
description: "Agentic AI is one of the most overused phrases in tech right now. Here's what it actually means for engineers building real systems — and why the mental model matters more than the buzzword."
author: akashtalole
---
"Agentic AI" is everywhere right now. Every product announcement uses it. Half the LinkedIn posts about AI use it. Most of them mean something slightly different by it, and a few mean nothing at all.
That's a problem, because the concept underneath the buzzword is actually important — not as a marketing term, but as an engineering paradigm. If you're building systems that use AI, or using AI tools to build systems, understanding what "agentic" really means will change how you think about design, failure, and trust.
So let's be precise about it.
---
## The Non-Agentic Baseline
Start with what most people actually use AI for today: you give it an input, it gives you an output, you do something with that output. A completion, a code suggestion, a chat response. One round trip.
This is AI as a function. `f(input) → output`. You're in control the whole time. You decide what to ask. You decide what to do with the answer. The AI doesn't take any actions in the world — it just produces text.
This is still genuinely useful. Most of GitHub Copilot works this way. Most of the Claude Code interactions I do in a day are this. Input → output → I decide what to do next.
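As code, the baseline is just a single call. A minimal sketch, where `complete` is a hypothetical stub standing in for any completion API:

```python
# The non-agentic baseline: one round trip, no tools, no loop.
# `complete` is a hypothetical stub, not a real client library.
def complete(prompt: str) -> str:
    return f"suggestion for: {prompt}"

# The human asks, reads the answer, and decides what to do next.
output = complete("write a null check for user.email")
```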
Agentic AI is different in one fundamental way: **the AI decides what to do next.**
---
## What "Agentic" Actually Means
An agent has a goal, a set of tools it can use, and a loop: observe → reason → act → observe again.
Instead of one round trip, you get a process. You give the agent a task — "fix the failing tests in this module" or "find all the places we're not handling null in this service" — and it figures out the steps: what to look at, what to run, what to change, what to check. It doesn't ask for permission between each step. It decides.
The three things that distinguish an agent from a plain LLM call:
**1. Tools / Actions**
The agent can actually *do things* — read files, run code, call APIs, write to databases, trigger workflows. It's not just producing text for a human to act on. It's acting directly.
**2. A Loop**
The agent observes the result of each action and decides what to do next based on what it saw. If the test still fails after the first fix, it doesn't stop — it looks at the new error and tries again.
**3. Goal-Directed Behaviour**
You give it an objective, not a single question. It plans toward that objective, adapts when things don't go as expected, and stops when the goal is met (or when it gives up).
That's it. Everything else — memory, multi-agent orchestration, specialized tools — is layered on top of this core loop.
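The loop above can be sketched in a few lines. Everything here is illustrative: `reason` stands in for an LLM call, and the tools are stubs, so the control flow runs without any API.

```python
# observe → reason → act → observe, until the goal is met or we give up.
def run_agent(goal, tools, reason, max_steps=10):
    history = []  # observations the agent has accumulated so far
    for _ in range(max_steps):
        decision = reason(goal, history)      # reason: choose the next action
        if decision["action"] == "done":      # goal met (or given up)
            break
        tool = tools[decision["action"]]      # act: look up the chosen tool
        observation = tool(**decision.get("args", {}))
        history.append((decision["action"], observation))  # observe the result
    return history

# A toy "fix the failing tests" run with a scripted reasoner and stub tools.
def stub_reason(goal, history):
    if not history:
        return {"action": "run_tests"}
    if history[-1] == ("run_tests", "1 failing"):
        return {"action": "edit_file", "args": {"path": "mod.py"}}
    if history[-1][0] == "edit_file":
        return {"action": "run_tests"}
    return {"action": "done"}

state = {"fixed": False}
def run_tests():
    return "all passing" if state["fixed"] else "1 failing"
def edit_file(path):
    state["fixed"] = True
    return f"patched {path}"

trace = run_agent("fix the failing tests",
                  {"run_tests": run_tests, "edit_file": edit_file},
                  stub_reason)
# trace: run_tests → edit_file → run_tests, ending with "all passing"
```

The point of the sketch is the shape: the model's output drives the next iteration, and the human only sees the trace afterwards.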
---
## Why This Mental Model Changes Everything
The shift from "AI as function" to "AI as process" sounds incremental. It isn't.
### Failure Propagates Differently
When an LLM call returns bad output, the damage is contained. You see the output, you reject it, nothing happened. When an agent acts on bad reasoning, it may have already written to a file, called an API, or made ten downstream decisions before you see anything wrong.
Agents fail in the middle of doing things. That's a different failure mode from tools you're used to. It requires thinking about what "undo" looks like, what the blast radius of a wrong action is, and where you need checkpoints.
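One concrete way to bound the blast radius of file writes is to checkpoint before the agent acts. A sketch, not a full transaction system, using only the standard library:

```python
import os
import shutil
import tempfile

class Checkpoint:
    """Snapshot a working directory before risky actions; roll back on demand."""
    def __init__(self, workdir):
        self.workdir = workdir
        self.backup = tempfile.mkdtemp(prefix="agent-ckpt-")
        shutil.copytree(workdir, os.path.join(self.backup, "snap"))
    def rollback(self):
        shutil.rmtree(self.workdir)
        shutil.copytree(os.path.join(self.backup, "snap"), self.workdir)

# Demo: the agent corrupts a file, and we restore the snapshot.
work = tempfile.mkdtemp()
with open(os.path.join(work, "service.py"), "w") as f:
    f.write("ok = True\n")

ckpt = Checkpoint(work)
with open(os.path.join(work, "service.py"), "w") as f:
    f.write("broken\n")          # a bad agent edit
ckpt.rollback()                  # undo everything since the checkpoint

content = open(os.path.join(work, "service.py")).read()
```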
### Observability Is Harder
With a function call, the trace is simple: input, output, done. With an agent, you have a sequence of reasoning steps, tool calls, intermediate observations, and decisions — many of which may not surface unless you explicitly log them.
I've spent more time building observability into agentic systems than almost anything else. Not because the agents are opaque by nature, but because the default level of visibility isn't enough to debug them when something goes wrong. And something always goes wrong eventually.
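A minimal version of that visibility is a structured trace: every decision and tool call logged as an event you can replay later. The event shape here is illustrative:

```python
import json
import time

class Trace:
    """Append-only log of agent events, serializable as JSON lines."""
    def __init__(self):
        self.events = []
    def log(self, kind, **fields):
        self.events.append({"t": time.time(), "kind": kind, **fields})
    def dump(self):
        return "\n".join(json.dumps(e) for e in self.events)

trace = Trace()
trace.log("decision", action="run_tests", reason="initial state unknown")
trace.log("tool_call", tool="run_tests", result="1 failing")
trace.log("decision", action="edit_file", reason="NameError in mod.py")

lines = trace.dump().splitlines()  # one JSON object per event
```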
### Testing Is Fundamentally Different
You can unit test a function. What do you unit test in an agent? The individual tool calls? The planning step? The end-to-end behaviour given a particular goal?
In practice, agentic systems need evaluation frameworks, not just test suites. You're testing probabilistic behaviour over a distribution of inputs, not deterministic outputs for specific inputs. This is one of the things that's genuinely hard about agentic AI right now, and anyone who tells you they've fully solved it is probably selling something.
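The shape of such an evaluation, reduced to its simplest form: run the agent over many task instances and check a success rate against a threshold, instead of asserting one deterministic output. The agent and tasks here are toys:

```python
def evaluate(agent, tasks, check, threshold=0.9):
    """Score an agent over a set of tasks; pass only if the success rate clears the bar."""
    passed = sum(1 for task in tasks if check(task, agent(task)))
    rate = passed / len(tasks)
    return rate, rate >= threshold

# A toy agent that mishandles one class of input.
toy_agent = lambda task: "failed" if "edge" in task else "solved"
tasks = ["fix bug A", "fix bug B", "edge case C", "fix bug D"]
check = lambda task, output: output == "solved"

rate, ok = evaluate(toy_agent, tasks, check)
# rate is 0.75, below the 0.9 threshold, so ok is False
```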
### The Action Space Is a Design Decision
When you're building an agent, one of the most important things you decide is what tools it has access to. This isn't just a capability question — it's a risk question.
An agent that can read files is less risky than one that can write them. One that can write files is less risky than one that can execute code. One that can execute code is less risky than one that can call external APIs with side effects.
Every tool you give an agent is a decision about what it can get wrong, and how badly. Scope the action space to what the task actually requires. Don't give it access to systems it doesn't need for this task. This sounds obvious and is frequently ignored.
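In code, scoping the action space can be as simple as handing the agent an allowlisted subset of a tool registry. All tool names here are illustrative stubs:

```python
# Full registry, including tools with external side effects.
ALL_TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda path, text: f"wrote {len(text)} bytes to {path}",
    "run_tests": lambda: "1 failing",
    "call_api": lambda url: f"POST {url}",  # side effects live here
}

def scoped_tools(names):
    """Expose only the tools this task needs; anything else is unreachable."""
    return {name: ALL_TOOLS[name] for name in names}

# A test-fixing task gets read/write/run, and never the external API.
tools = scoped_tools(["read_file", "write_file", "run_tests"])
has_api = "call_api" in tools  # False: the agent cannot call it at all
```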
---
## What This Looks Like in Practice
I'll give you two examples from work I'm doing right now.
**Coding Agent**
A coding agent that can read the codebase, run tests, write code changes, and re-run tests. The goal: fix a failing test suite. The loop: read the error → reason about the cause → make a change → run tests → observe new output → repeat.
The action space is intentionally limited: read files, write files, run tests. It can't commit, can't push, can't touch production. The blast radius of a mistake is bounded. I can inspect every step it took before I decide whether to accept the changes.
**Copilot Studio Multi-Agent**
A customer-facing agent that can look up account information, check order status, and escalate to a human. Behind it, specialist agents handle different domains. The orchestrator decides which agent to route to based on intent.
Here the human-in-the-loop is at the end: every action that has side effects (like modifying an order) goes through an approval step before it's committed. The agents can gather information and reason about it freely. They can't take irreversible actions autonomously.
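That approval step can be sketched as a gate in front of the action executor: anything on the side-effect list needs an explicit yes before it runs. The action names and callbacks are illustrative:

```python
# Actions with irreversible consequences require human sign-off.
SIDE_EFFECTS = {"modify_order", "refund"}

def execute(action, args, perform, approve):
    """Run safe actions directly; route side-effecting ones through approval."""
    if action in SIDE_EFFECTS and not approve(action, args):
        return ("blocked", action)
    return ("done", perform(action, args))

perform = lambda action, args: f"{action}({args})"
deny_all = lambda action, args: False  # stand-in for a human who said no

safe = execute("check_order_status", {"id": 7}, perform, deny_all)   # runs
risky = execute("modify_order", {"id": 7}, perform, deny_all)        # blocked
```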
Same principle in both cases: the loop is autonomous, but the *consequences* of the loop are bounded and visible.
---
## The Mental Model in One Sentence
An agent is an LLM with a goal, tools to act on the world, and a loop that keeps going until the goal is met or it gives up — and your job as the engineer is to decide what it can do, what it can see, and where a human needs to stay in the loop.
Get that right and agentic AI is genuinely powerful. Get it wrong and you have a system that confidently does the wrong thing with no easy way to stop it.
---
*Day 3 of the [30-Day AI Engineering series](/posts/30-day-ai-engineering-blog-plan/). Previous: [AI in the SDLC — The Honest State of Things in 2026](/posts/ai-in-the-sdlc-the-honest-state-of-things-in-2026/).*