Inside the Box, Man with Machine
2025/06/01
Can you imagine a color you haven’t seen before? Or a primary sense, like hearing or seeing, but as different from them as hearing is from seeing?
Look at “high-fantasy” literature, for example. Since its beginnings in the 19th century, all of its “fantastic” notions have been remasters of what already exists: sometimes an amalgam (e.g. a tree that speaks), sometimes a malformation (e.g. an eye with no body), sometimes a derivation.
Recombinations. Clever, and sometimes surprising and pleasant and amusing.
And definitely creative.
Original, though? Not really. Not in the sense that it’s “never been seen before.”
Disclaimer: this piece isn’t a lament; it’s a curiosity.
There’s this limit to what we can conceive of. To what we can imagine. Almost as if there’s a wall, invisible and yet very real: it becomes tangible when you stretch your mind far enough to hit it.
I think it’s more like a box. We can feel its edges when we hit them. We seem to be closer to one edge of it: the one you can easily run into with silly questions like “can you think of a new color?”
It’s not like we can leap out of this box. We can’t. Nor can LLMs. But we get to experience the pressure of its walls in a way LLMs don’t. That tension may be our only edge.
This box is a universal phenomenon, indifferent to time, to you, to me. Intellect, age, wisdom: nothing has an effect on it.
The box exists as an entity.
—
Something has been bothering me for a while in my daily interactions with LLMs. There’s a nasty side-effect to delegating to an LLM parts of the problem-solving process that I’d traditionally have carried out myself, and it grows more acute the more I trust the LLM with the outcome.
I want to call it a reasoning tax.
“It CANNOT be done, because, listen, here’s the spec, and here are 3 reasons why it cannot be done. Forget it. Pick from what I propose instead.” -ChatGPT
In response to my suggestion to blend a few background images a certain way using pure CSS, without JavaScript, ChatGPT insisted in more than one way that, while reasonable as a premise, it could not be done.
But here’s the thing: it could be done, and my reasoning was entirely fine. The evidence: browser vendors have implemented support for exactly what I was proposing, although the documentation is a little lacking.
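For the curious, here’s a minimal sketch of the kind of thing I mean. It isn’t my exact case; assume the goal is simply to blend two background images on a single element. The selector and image URLs are placeholders, but `background-blend-mode` itself is standard, widely supported CSS:

```css
/* Minimal sketch, not the original case: blend two background
   images on one element with pure CSS, no JavaScript.
   The selector and URLs are placeholders. */
.hero {
  background-image:
    url("texture.png"),  /* top layer */
    url("photo.jpg");    /* bottom layer */
  background-size: cover;
  /* One blend mode per layer, in the same order as above. */
  background-blend-mode: multiply, normal;
}
```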
The part that bothers me is this: what would I have done had I been a little less stubborn, or perhaps not as versed in this domain? Or, and here’s the nugget, if I eventually grow complacent and trust by default whatever the LLM tells me?
Obviously, I would never have come up with a solution that lies outside the LLM’s design space. Putting aside the wasted effort of following one of the solutions it proposed, this has a rather massive implication, one that eclipses all talk of waste and effort:
How will the design space grow?
I will not attempt to answer. I think it’s very important for us all to think about this, and to take it seriously.
—
Anyway, this is what I’m now calling the reasoning tax of making LLMs part of my daily workflow. It’s the loss of “tensional reasoning”: the friction that leads to breakthroughs.
Since this last occurrence, I’ve been more principled in my approach: I restored the tradition and only added a little to it. Once again, I explore the design space myself. Then, and only then, do I query an LLM, either for an expansion of that space or for a deeper dive into some part of it.
—
So what’s with the prelude about original thoughts? It’s not fully clear in my mind yet, but there’s something about inference that I (we?) must always be cognizant of. I think of it this way:
Imagine you have a set of square Lego pieces and you want to form a triangle.
When you ask the LLM to build you the triangle, it will arrange the square pieces in a way that forms a triangle. If it knows that triangular Lego pieces exist, it might suggest that you buy one instead.
When you ask me to build you the triangle, I might just look for a triangular Lego piece, or build one myself if none exists, or arrange the square pieces into a triangle for you.
Maybe I don’t build the triangle at all.
Maybe I ask: why do we need a triangle in the first place?
I don’t know how to articulate it any better yet. But in the same sense that we as humans seem to have limits to our imagination and an inability to conceive of original thought, LLMs lack the ability to say:
“What if?”
Not the kind of “what if?” that answers a question. LLMs can do that when prompted; they can generate counterfactuals and hypotheticals.
It’s the kind of “what if?” that makes you challenge assumptions you didn’t know you had, or made so long ago that you now take them as a given.
—
If I ask, “what if gravity reversed every Tuesday?”, I’m not just playing with physics. I’m playing with the frame that says physics is stable. LLMs might simulate that move. But do they feel the weight of the rule being bent?
—
Here’s an example of how unreasonable thinking — a “what if?” — led to an actual design shift.
A web application that relays short stories expected the user to move from its entrance through three screens before reading their daily story.
I asked, “so what if I lead with the story?”
The exercise upset the entire flow and flipped it on its head.
In the process, I managed to surface assumptions I hadn’t been intentional about, and account for them. And while I didn’t actually adopt the reversed flow, the original one improved all the same from the exercise.
—
What we have in common is that we’re both bound by the “Invisible Box”. You can argue the semantics: humans employ primitives, LLMs embed parameters; humans have “intellect” to compose ideas, LLMs emulate it via inference. But that’s not it. There’s something within the Box that humans can do and LLMs can’t. That’s my suspicion, though I need more to substantiate it.
I’d like to identify it so that I make sure I don’t lose it, especially if it’s truly irreplaceable as I now suspect it is.
The dynamic between human and machine is still a long way from being truly understood.
I think of it more like a duet.
Our scene is the design space, and our stage is the Invisible Box.
To be true to my part, I must play to my strong suit.
What exactly IS my strong suit?
An LLM exceeds the human in capacity, but the human exceeds it in the ability to…
Be unreasonable?
It sounds crazy, but maybe part of what allows us to reason the way we do is our ability to be unreasonable.
Unreasonable thoughts expand the design space not because they’re right, but because they shatter constraints we forgot were optional.
Or perhaps it’s the combination of “imagination” and the ability to reason that defines our intelligence? Whatever imagination means, anyway.
…
All I know is that I’ve seen glimpses of it, and that it’s fundamental in nature.
Maybe you’ve seen it too?