Posts about AI

Artificial intelligence and mastery

What happens to our collective intelligence, on a long-term timeline, if we allow the robots to do our thinking for us?

Many people I follow online have already suggested that standards are slipping across organizations that rely on AI. I don’t know anything about that; I work for myself, and I have impeccable standards. (If anything, my standards are increasing over time, which is as it should be, probably.)

That being said, to use artificial intelligence in your work is to lower your standards by at least a little bit. It’s a way of saying, “yes, the plagiarism machine that is essentially a word-by-word prediction algorithm can do this part of my work, and there is no intrinsic value in me doing it.” (Artificial intelligence is naturally good at summarizing long text and making it shorter, because that is quite literally what it is designed to be good at. So it’s not a terrible way to start research, or to get help shortening your lengthy email to the C‑suite, who are all probably summarizing your email with AI anyway. But when it comes to actually doing original work and thinking, artificial intelligence tends to be much less predictable and nowhere near the same level of quality. It is, after all, making it up as it goes.)
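To make the “word-by-word prediction” point concrete, here is a deliberately tiny sketch: a bigram frequency counter in Python. It is nothing like a real language model, but it shows the core mechanic, which is that the next word is chosen from statistics over prior text rather than from understanding.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows which
# in a training text, then "predict" by picking the most common follower.
def train_bigrams(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # The model has no understanding, only frequency statistics.
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

counts = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(counts, "the"))  # prints "cat" (seen twice after "the")
```

A real model replaces these raw counts with a neural network predicting over an enormous corpus, but the shape of the task is the same: given what came before, emit a statistically plausible next token.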

But I was reading Clear Thinking by Shane Parrish, and he says something very interesting on page 79:

Most of the time when we accept substandard work from ourselves, it’s because we don’t really care about it. We tell ourselves it’s good enough, or the best we can manage given our time constraints. But the truth is, at least in this particular thing, we’re not committed to excellence.

When we accept substandard work from others, it’s for the same reason: we’re not all in. When you’re committed to excellence, you don’t let anyone on your team half-ass it. You set the bar, you set it high, and you expect anyone working with you to work just as hard and level up to what you expect or above. Anything less is unacceptable.

I think this is the discomfort that many of us feel with artificial intelligence. If we take shortcuts like this, we say that the process has no value. But for a lot of us, the process is the work. The process is where value is generated, and the process is what we’ve mastered. It’s what we’re committed to.

So asking artificial intelligence to write the blog post, design the logo, or program a website feels about as alien to a creative as hiring somebody else to build the deck would feel to a carpenter. It suggests that the bar is set low, and that perhaps we are no longer capable of reaching it on our own. (If I were hiring a carpenter and I found out they outsourced their decks, I am not sure I would hire them.)

Shane continues at the bottom of page 79, carrying on to page 80:

Masters of their craft don’t merely want to check off a box and move on. They’re dedicated to what they do, and they keep at it. Master-level work requires near fanatical standards, so masters show us what our standards should be. A master communicator wouldn’t accept a ponderous, rambling email. A master programmer wouldn’t accept ugly code. Neither of them would accept unclear explanations as understanding.

We’ll never be exceptional at anything unless we raise our standards, both of ourselves and of what’s possible. For most of us, that sounds like a lot of work. We gravitate toward being soft and complacent. We’d rather coast. That’s fine. Just realize this: if you do what everyone else does, you can expect the same results that everyone else gets. If you want different results, you need to raise the bar.

“Neither of them would accept unclear explanations as understanding” sums up the entire situation: each time we use AI, we are essentially saying we are fine with somebody else doing this work inside a black box. The tools reveal our priorities. If we rely on AI, we don’t become masters. At some point, our reliance on the tool masters us.

Typography and AI

I read something today that perfectly captures how I feel about artificial intelligence.

According to reporting from The Verge, Monotype (a company I do not like) is really pushing the idea that “AI is coming for our fonts.” Which is a gross way to say that AI is coming for type designers, which is a gross thing to publicly get excited about.

Apart from the fact that we continue to discuss AI taking our jobs and our humanity from us (as though it’s desirable), the other problem here is that this future isn’t real. At least, not now:

AI, the report suggests, will make type accessible through “intelligent agents and chatbots” and let anyone generate typography regardless of training or design proficiency. How that will be deployed isn’t certain, possibly as part of proprietarily trained apps. Indeed, how any of this will work remains nebulous.

Why Monotype would want to push any of this is beyond me. The Verge mostly attempts to draw similarities between today’s AI proclamations and the effects of industrialization on typography in the early 20th century. The metaphor is completely broken, because unlike these AI proclamations, the effects of industrialization were actually real.

And then, the money quote. This is in reference to Zeynep Akay, director at typeface design studio Dalton Maag:

“It’s almost as if we are being gaslighted into believing our lives, or our professions, or our creative skills are ephemeral.”

It is exactly this! In a rush to get investor dollars, every company in the world is trying to tell professionals in every space (but particularly in white collar information work) that their jobs, livelihoods, and skill sets are irrelevant in the coming tide.

The current chatbots are useful tools, but any company claiming they’re “replacing” workers with AI is attempting to paint a narrative about layoffs with a different colour. The tool just isn’t there. It’s especially not there for any work that requires creative thought, and because the entire AI chain is more or less word prediction based on prior knowledge, there isn’t much chance AI in its current incarnation could design anything actually new.

To put it bluntly, I don’t think there’s a snowball’s chance in hell that AI is designing typefaces for us any time soon.

Jony Ive, Sam Altman, and the way AI is changing the world

I found myself nodding in agreement while reading Jason Snell’s piece about OpenAI buying Ive’s tech design startup:

So OpenAI and Apple’s legendary design lead are embarking on a journey to build some new AI-enabled hardware. They’re coy about what it will be — “probably not a phone, definitely not a watch, maybe not something you wear” — but my gut feeling is that it’ll be something we’ve actually seen before. My true prediction is that it’ll be more like the Humane Ai Pin or that AI Pendant, but they’re embarrassed to be associated with those products, so they’re going to wait a little longer to let the stink clear.

I’m skeptical about OpenAI in general, because while I think AI is so powerful that aspects of it will legitimately change the world, I also think it has been overhyped more than just about anything I’ve seen in my three decades of writing about technology. Sam Altman strikes me as being a drinker of his own Kool-Aid, but it’s also his job to make everyone in the world think that AI is inevitable and amazing and that his company is the unassailable leader while it’s bleeding cash.

I think it’s important to clarify that OpenAI isn’t bleeding cash; they’re haemorrhaging it. This is all further reinforced by the fact that OpenAI is purchasing Ive’s company for an astronomical $6.5 billion, and all that money is in privately owned stock funded by equity firms, banks, and desperate venture capital companies.

With all that in mind, I find myself wondering what Ive and Altman’s new product could possibly be. Jony Ive insists that “good design elevates humanity,” but given the current direction of AI in our society, I don’t see how any AI-based product could.

I am not anti-AI. I use it frequently, particularly for summarizing in-depth Google research (my most recent example: “what eSIMs should I consider for traveling around France and the Netherlands, and which options have the widest coverage?”). I also use it for rubber ducking, in which I copy and paste error messages from my code into the chatbot and ask it to suggest potential solutions. (It never gets it right, but it at least gets me thinking.)

So I am not some sort of anti-AI Luddite or prude. I think AI chatbots have enormous potential for research, data analysis, and even as code assistants. However, the technology is not clearly a net positive for the world.

On one hand, I see many smart people I know fawning over this technology product that is often merely a very advanced version of Siri. I also see Jony Ive and Sam Altman making googly eyes at each other in a very awkward announcement video. Even Jony Ive, a man who makes product design sound like Aristotelian philosophy, is fawning over Sam Altman. 

It’s also become clear that many unhinged CEOs are sending outrageous emails about their expectations for AI. In summary: “AI is coming for your job, so get ten times better at it quickly.” Mostly, it sounds like they expect their employees to do ten times the work for the same pay as before, as a baseline for keeping their jobs. AI is already being used to exploit the workforce.

For the first time in our collective history, knowledge workers face an extinction-level event, not dissimilar to what happened to blue-collar workers in the face of factory automation and machining. And Sam Altman and Jony Ive think that the best path forward is to continue developing a product they claim will be “for everybody,” when all current signs point to AI mostly being a tool for incredibly wealthy CEOs to extract more value and productivity from an ever-shrinking workforce of disenfranchised employees.

How can an AI product “elevate humanity” when it’s so often used right now to oppress and exploit the working class? Jony Ive is a very smart man, but his most recent work includes $3,000 jackets and personal branding for the king of England. Ive has a long history of designing luxury products for wealthy people. It is hard to imagine yet another tech product solving (or even avoiding) the problem its base technology has already created.

Unlike certain technocrats, I am unconcerned about a Terminator-like future. I am far more concerned with a future where technology has created an even larger disparity between the rich and the poor. Unless we can all figure out Universal Basic Income (and conservative governments the world over are uninterested in that notion), I fear that we are creating a society that values the contributions of workers less than ever before, even as the cost of living becomes more unaffordable.

Sam Altman and Jony Ive spend a lot of time in their self-congratulatory video discussing San Francisco. This is what Sam had to say:

“San Francisco has been, like, a mythical place in American history, and maybe in world history in some sense. It is the city I most associate with the leading edge of culture and technology. … The fact that all of those things happen in the Bay Area and not anywhere else on this gigantic planet we live on, I think, is not an accident.”

This is true (even if it ignores much of the rest of the Valley, and San Jose in particular), but it’s worth noting that this is all happening in the United States. Right now, the US is a country where “move fast and break things” is quite literally the political policy of the White House. America leads in technology for many reasons, but that attitude has become a global problem.

Forgive me, then, for being skeptical that ChatGPT and its competitors will change the world in a net positive way. Forgive me for having a hard time imagining Jony Ive ever again designing anything as revolutionary as the iPhone (which, for all its flaws, globally democratized technology, changed society and global commerce, and has created a ripple effect that will last decades). Forgive me for assuming that all this hype around this partnership is a lot of smoke and mirrors to make OpenAI’s investors feel like there’s still a fish on the line, rather than a hole in the boat.

I am an optimist about humanity, but I am unsure technology is always for humanity’s betterment. I do not see how, to use Sir Jony Ive’s words, AI “elevates humanity.” Right now, it looks to me like the only person who’s been elevated in all this is Sam Altman.