What happens to our collective intelligence, over the long term, if we allow the robots to do our thinking for us?

Many people I follow online have already suggested that standards are slipping at organizations that rely on AI. I don’t know anything about that; I work for myself, and I have impeccable standards. (If anything, my standards are increasing over time, which is as it should be, probably.)

That being said, to use artificial intelligence in your work is to lower your standards by at least a little bit. It’s a way of saying, “yes, the plagiarism machine that is essentially a word-by-word prediction algorithm can do this part of my work, and there is no intrinsic value in me doing it.” (Artificial intelligence is naturally good at summarizing long text, because that is quite literally what it is designed to be good at. So it’s not a terrible way to start research, or to get help shortening your lengthy email to the C‑suite, who are probably summarizing your email with AI anyway. But when it comes to actually doing original work and thinking, artificial intelligence tends to be much less predictable and nowhere near the same level of quality. It is, after all, making it up as it goes.)

But I was reading Clear Thinking by Shane Parrish, and he says something very interesting on page 79:

Most of the time when we accept substandard work from ourselves, it’s because we don’t really care about it. We tell ourselves it’s good enough, or the best we can manage given our time constraints. But the truth is, at least in this particular thing, we’re not committed to excellence.

When we accept substandard work from others, it’s for the same reason: we’re not all in. When you’re committed to excellence, you don’t let anyone on your team half-ass it. You set the bar, you set it high, and you expect anyone working with you to work just as hard and level up to what you expect or above. Anything less is unacceptable.

I think this is the discomfort that many of us feel with artificial intelligence. When we take shortcuts like this, we are saying that the process has no value. But for a lot of us, the process is the work. The process is where value is generated, and the process is what we’ve mastered. It’s what we’re committed to.

So asking artificial intelligence to write the blog post, design the logo, or program the website feels about as alien to a creative as hiring somebody else to build the deck would feel to a carpenter. It suggests that the bar is set low, and that perhaps we are no longer capable of reaching it on our own. (If I were hiring a carpenter and found out they outsourced their decks, I’m not sure I would hire them.)

Shane continues at the bottom of page 79, carrying over onto page 80:

Masters of their craft don’t merely want to check off a box and move on. They’re dedicated to what they do, and they keep at it. Master-level work requires near fanatical standards, so masters show us what our standards should be. A master communicator wouldn’t accept a ponderous, rambling email. A master programmer wouldn’t accept ugly code. Neither of them would accept unclear explanations as understanding.

We’ll never be exceptional at anything unless we raise our standards, both of ourselves and of what’s possible. For most of us, that sounds like a lot of work. We gravitate toward being soft and complacent. We’d rather coast. That’s fine. Just realize this: if you do what everyone else does, you can expect the same results that everyone else gets. If you want different results, you need to raise the bar.

“Neither of them would accept unclear explanations as understanding” sums up the entire situation: each time we use AI, we are essentially saying that we are fine with somebody else doing this work inside a black box. The tools we choose reveal our priorities. If we rely on AI, we don’t become masters. At some point, reliance on the tool masters us.