AI Case Studies

by Christopher Dickherber

Exploring the intersections of human intuition and machine logic through real-world challenges.


The Poster That Forgot the Fifth S

How a simple task exposed a gap between AI confidence and real-world reliability

Context

I was creating a poster with ChatGPT and DALL·E to represent a five-part productivity framework for artists. The goal was to keep my exact wording intact in a visually appealing format.

Problem

The AI-generated poster looked nice, but it consistently dropped or repeated one of the five listed items without warning. I realized the image tool couldn't maintain exact text fidelity, and ChatGPT hadn't warned me about that risk.

Insight

The tool couldn't verify whether it had preserved all of the required information. Worse, it offered no transparency about that limitation. This pointed to a need for better guidance at the point of request.

Fix

I built the layout in Word to keep control of the text, then layered it over a generated background. This hybrid method worked, but more importantly, it taught me how the system could have prevented the error up front.
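
To make that concrete, here is a minimal Python sketch of the kind of fail-safe I mean. It assumes the poster text can be extracted at all (the pytesseract OCR library stands in for that step here, and OCR is unreliable on stylized artwork), and the five item names are hypothetical placeholders rather than my actual framework:

    import re
    from PIL import Image
    import pytesseract  # assumed OCR backend; any text-extraction step would do

    # Hypothetical stand-ins for the five framework items.
    REQUIRED_ITEMS = ["Start", "Sketch", "Schedule", "Share", "Sustain"]

    def check_text_fidelity(image_path, required):
        """Count how many times each required phrase appears in the image text."""
        text = pytesseract.image_to_string(Image.open(image_path)).lower()
        return {item: len(re.findall(re.escape(item.lower()), text))
                for item in required}

    counts = check_text_fidelity("poster_draft.png", REQUIRED_ITEMS)
    for item, n in counts.items():
        if n == 0:
            print(f"MISSING: {item}")            # the forgotten fifth S
        elif n > 1:
            print(f"DUPLICATED ({n}x): {item}")

Had the pipeline run a check like this before showing me the result, the missing fifth S would have been flagged immediately instead of discovered by eye.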

Broader Lesson

AI tools need fail-safes when precision matters. Users shouldn't have to discover the limitations the hard way.



The Helpful AI That Couldn't See

Diagnosing a hidden problem when none of my apps could retrieve data 

Context

During a power outage, my apps weren't retrieving information, but I could still log in, so some data was flowing. I asked ChatGPT what was wrong.

Problem

The AI offered guesses about app settings or software issues, but ignored the real possibility: my connection was degraded by environmental factors. It didn't know what it didn't know, and it didn't say so.

Insight

This was less about knowledge and more about perspective. The AI lacked situational awareness but acted like it had all the facts.

Fix

I diagnosed the issue myself by checking service maps and testing other devices. What I really needed was for the AI to admit its limitations and invite a different kind of thinking.
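
For what it's worth, the triage I ended up doing by hand fits in a short script. This is a minimal sketch of my own, not something ChatGPT suggested: it separates "fully down" from "degraded" by timing a small request, which was exactly the possibility the AI never raised. The test URL and the 3-second threshold are arbitrary placeholders:

    import socket
    import time
    import urllib.request

    def can_resolve(host="example.com"):
        """DNS check: logging in worked, so this will likely still pass."""
        try:
            socket.gethostbyname(host)
            return True
        except OSError:
            return False

    def fetch_latency(url="https://example.com", timeout=10.0):
        """Time a small HTTP fetch; slow-but-successful suggests a degraded link."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    if not can_resolve():
        print("No DNS: the connection is fully down.")
    else:
        latency = fetch_latency()
        if latency is None:
            print("DNS works but fetches fail: a partial outage.")
        elif latency > 3.0:  # arbitrary threshold for "degraded"
            print(f"Connected but slow ({latency:.1f}s): likely a degraded link.")
        else:
            print(f"Connection looks healthy ({latency:.1f}s).")

The point isn't the script itself; it's that "slow but technically connected" is a distinct state the AI never put on the table.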

Broader Lesson

When systems can't see your environment, they should say so. A humble AI is a helpful AI.

When AI Doesn't Listen Mid-Sentence

Understanding why conversational AI struggles to interrupt or adapt in real time

Context

I was experimenting with more natural, stream-of-consciousness prompts. I'd pause, revise myself mid-sentence, and expect the AI to follow along, just as a human would.

Problem

The AI jumped in too soon or misunderstood where I was going. It couldn't sense that I wasn't done thinking. Even when I asked it to wait, it didn't always listen.

Insight

The issue wasn't intelligence; it was rhythm. The model expects a complete prompt, not something emotionally layered or mid-thought. For neurodivergent users, this creates friction.

Fix

I adjusted my phrasing and tried voice-driven AI platforms with better pacing. Still, the experience pointed to a larger need for attentiveness in human-AI conversation design.
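
One concrete pattern that would help is an end-of-turn debounce: rather than replying the instant the input stream pauses, the system waits out a short silence window and restarts the clock whenever the speaker resumes. Here is a minimal Python sketch of the idea; the class, the demo, and the 1.5-second threshold are all my own assumptions, not any vendor's API:

    import asyncio

    PAUSE_THRESHOLD = 1.5  # seconds of silence to treat as "done thinking" (assumed value)

    class TurnDetector:
        """Declares a turn over only after a sustained pause, not at the first gap."""

        def __init__(self, threshold=PAUSE_THRESHOLD):
            self.threshold = threshold
            self._done = asyncio.Event()
            self._timer = None

        def on_speech(self):
            """Call on every new word or audio chunk; activity resets the countdown."""
            if self._timer is not None:
                self._timer.cancel()
            loop = asyncio.get_running_loop()
            self._timer = loop.call_later(self.threshold, self._done.set)

        async def wait_for_turn_end(self):
            await self._done.wait()

    async def demo():
        detector = TurnDetector()
        # Mid-sentence pauses shorter than the threshold never trigger a reply.
        for gap in (0.2, 0.9, 0.4):
            detector.on_speech()
            await asyncio.sleep(gap)
        detector.on_speech()  # final chunk; the speaker now stays silent
        await detector.wait_for_turn_end()
        print("Only now does the assistant respond.")

    asyncio.run(demo())

The threshold would need to adapt per user, of course. The point is that "silent so far" and "done talking" are different signals, and current chat interfaces collapse them into one.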

Broader Lesson

Good AI isn't just accurate; it's attuned. Future models should learn how to listen, wait, and follow thought patterns with more nuance.