AI tools can accelerate support engineering. They can draft commands, summarize logs, propose scripts, and generate troubleshooting steps in seconds. But unlike software development, support engineering operates directly in production, often under pressure, and without the safety nets of code review or automated testing.
That’s why AI-generated commands must be treated with caution. The engineer is the only buffer between the customer and a potentially destructive action.
AI Can Produce Commands That Look Right — But Are Wrong
AI is excellent at producing plausible commands. A single incorrect flag, path, or assumption can cause outages or data loss.
Here are real-world examples of how subtle mistakes can slip in.
Example 1: The rsync Trap — Copying Contents Instead of the Folder
A common AI mistake is generating an rsync command that copies the contents of a folder instead of the folder itself.
AI-generated (wrong)
rsync -av /source/folder/ /target/location/
This copies the contents of /source/folder into /target/location/, flattening the structure. If the engineer expected /target/location/folder/, this is a silent surprise: at worst, files in the target get overwritten, and even when nothing is lost, it reflects poorly on the TSE.
In a PR-reviewed environment, someone would catch this. In support engineering, you are the reviewer.
This is why it is critical that TSEs check and recheck instructions before sending them to customers.
Example 2: Missing Safety Flags in Destructive Commands
AI often suggests commands that work but lack guardrails. For example, generated bash scripts frequently omit safety options such as the following, which make a script fail fast instead of continuing past an error:
set -euo pipefail
These are common safeguards SREs use: -e exits on the first failing command, -u treats unset variables as errors, and -o pipefail makes a pipeline fail if any command in it fails.
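A minimal sketch of what those guardrails do in practice. The demo script below (written to a hypothetical path under /tmp) pipes a failing command into a succeeding one, then aborts before reaching the final line:

```shell
# Demo of fail-fast guardrails: without pipefail, `false | cat` would
# report success and the script would keep running past the error.
cat > /tmp/guard-demo.sh <<'EOF'
set -euo pipefail
false | cat        # pipefail makes the whole pipeline fail here
echo "unreachable" # never runs: set -e aborts on the failed pipeline
EOF

bash /tmp/guard-demo.sh || echo "script failed fast, as intended"
```

Running AI-suggested scripts through a linter such as shellcheck before sending them out catches many of these missing guardrails as well.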
The Engineer Remains the Safety Layer
AI can accelerate support engineering, but it cannot replace the engineer’s responsibility to ensure correctness. Customers trust you, not the model. Your judgment, caution, and understanding are what keep environments safe.