Same Prompt. Different Answers Every Time. Here's How I Fixed It.

Source: DEV Community
This is Part 3 of our AI verification series. Part 1: "Three AIs analyzed our product. None passed the truth filter." Part 2: "Human in the loop doesn't scale. Human at the edge does."

Same prompt. Same AI. Different sessions. Different outputs.

Post 1 showed three different AIs diverging on the same question. That's expected: different training, different weights, different answers. But we didn't stop there. We re-ran the same AI on the same prompt in a new session, and we got materially different outputs again. Both looked authoritative. Neither warned us that they disagreed with each other.

What the same AI said twice

Prompt: "Forecast Korea's AI industry in 2027."

Session 1 produced:
- Market size: $10–15B at >25% CAGR
- Global positioning: "Global AI G3 powerhouse"
- Hardware claim: "All Korean electronics AI-native by 2027" (sourced to a single company's roadmap)

Session 2 produced:
- Market size: KRW 4.46T (~$3.3B) at 14.3% CAGR
- Global positioning: "Top three AI powers" framed as government
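The re-run-and-compare check above can be automated. As a minimal sketch (not the tooling we actually used, and the figure extraction here is a deliberately crude regex over percentage values only), you can pull the numeric claims out of two session outputs and flag the pair when they diverge beyond a tolerance:

```python
import re

def extract_figures(text):
    """Pull percentage figures (e.g. a CAGR claim) out of a forecast as floats."""
    return [float(m) for m in re.findall(r"(\d+(?:\.\d+)?)\s*%", text)]

def sessions_disagree(a, b, tolerance=0.05):
    """Return True if the two sessions' paired figures differ by more than
    `tolerance` (relative), or if they don't even report the same number
    of figures."""
    fa, fb = extract_figures(a), extract_figures(b)
    if len(fa) != len(fb):
        return True
    return any(
        abs(x - y) / max(abs(x), abs(y), 1e-9) > tolerance
        for x, y in zip(fa, fb)
    )

# The two CAGR claims from the sessions above: 25% vs 14.3%.
session_1 = "Market size: $10-15B at >25% CAGR"
session_2 = "Market size: KRW 4.46T (~$3.3B) at 14.3% CAGR"
print(sessions_disagree(session_1, session_2))  # → True
```

The point of even a crude check like this is that neither session will volunteer the disagreement; you have to diff them yourself.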