ChatGPT at Three: Powerful, imperfect, & unresolved (#463)

  • Rick LeCouteur
  • 14 minutes ago
  • 5 min read

On November 30, 2022, OpenAI quietly released what it called a research preview of a conversational AI system.


The instructions were modest; internally, staff were told not to treat it like a product launch.


The rest of the world didn’t get the memo. Within weeks, that low-key experiment, ChatGPT, had become the fastest-growing consumer app in history, crossing 100 million users in about two months.


But what is ChatGPT?


Is it a search engine with better manners? A glorified auto-complete? A research assistant, writing coach, or therapist-adjacent confidant? A threat to schooling and employment? Or a new kind of calculator for knowledge work?


Three years in, the honest answer is: it’s all of these at once. And that’s precisely why we’re struggling to make sense of it.


The original launch of ChatGPT ran on GPT-3.5 and was explicitly framed as an experiment to “get users’ feedback and learn about its strengths and weaknesses.” That framing mattered. We weren’t being handed a finished product; we were being invited into a live-fire test.


What followed has been a blur:


  • Rapid upgrades from GPT-3.5 to GPT-4 and now GPT-5.1, with multimodal abilities spanning text, images, audio and video.

 

  • A freemium business model and deep integration into tools from search engines to office suites.

 

  • Hundreds of millions of people using it for everything from homework help and coding to drafting legal letters, lesson plans and bedtime stories.


Researchers estimate that by mid-2025, roughly 10% of the world’s adults had used ChatGPT, with usage growing fastest in lower-income countries. Yet for a technology this pervasive, our mental model for it is still wobbly.


We rarely talk this way about email or spreadsheets anymore. Those tools settled down into the background. We know what a spreadsheet is for. ChatGPT, by contrast, keeps leaking into new corners of life faster than norms can form.


Part of the confusion is that ChatGPT is simultaneously mundane and uncanny.


  • On the mundane side, much of its real use is deeply practical: rewriting clumsy emails, summarizing meetings, drafting policies, checking code, generating quiz questions, or helping a student untangle a dense paragraph in a biology text.

 

  • These are not sci-fi applications; they’re the normal, slightly boring tasks of knowledge work and learning.

 

  • On the uncanny side, it talks back. It remembers context, approximates empathy, mirrors our tone, and can sound surprisingly wise or disarmingly funny.

 

  • That conversational layer tempts us to treat it as more than a tool - sometimes as a friend, mentor, or confessor.

 

  • OpenAI’s own CEO has remarked how willing people are to share highly personal details with a chatbot, sometimes more than they would with a human.


We also know it hallucinates, reflects biases, and can nudge people toward conspiratorial or magical thinking if used in unhealthy ways; clinicians have begun to talk about “ChatGPT psychosis” in vulnerable individuals.


So, we inhabit a strange middle. A system that:


  • Feels social, but is not a person

 

  • Can assist meaningfully, but cannot take responsibility

 

  • Is powerful enough to reshape habits, but fragile in ways that demand skeptical oversight.


No wonder we still don’t quite know how to feel about it.


Look at education and you can see the confusion in fast-forward.


When ChatGPT first arrived, some school systems responded with outright bans, especially worried about students outsourcing essays and problem sets.


A few years on, the picture is messier:


  • AI-related cheating has spiked in universities, and detection tools are notoriously unreliable.

 

  • Surveys show a rising share of teenagers using ChatGPT for homework and research, but with ambivalent attitudes - comfortable using it to understand topics, much less so to generate full essays.

 

  • Universities like Duke are running campus-wide pilots, giving every student and staff member access to ChatGPT-4o while actively studying its effects on learning.


Educators are stuck between two bad options:


  • Pretend the tool doesn’t exist (and drive it underground), or

 

  • Embrace it without clear guardrails.


In practice, many are improvising new forms of assessment - more in-class writing, more oral exams, more emphasis on process over product.


Similar improvisation is happening in workplaces, journalism, software development, healthcare and government. Official policies often lag behind actual use. You can find organizations that sternly forbid ChatGPT and others quietly reliant on it to keep daily operations afloat.


We are still, collectively, prototyping the norms.


ChatGPT sits at the intersection of genuine public utility and enormous corporate power:


  • On one hand, it’s hard to deny the access it has created.

 

  • Tools once restricted to specialists - statistical summarization, code generation, data cleaning, advanced language support in dozens of languages - are now available through a simple chat box, often for free.

 

  • On the other hand, ChatGPT is not a public utility in the traditional sense.

 

  • It’s a proprietary product, backed by tens of billions of dollars from Microsoft and other investors, and embedded in a competitive AI race that rewards speed and market share.


That tension shows up everywhere:


  • Debates about privacy and the use of conversation data to improve models

 

  • Concerns about safety, misuse and the social costs of hallucinations

 

  • Partnerships that weave generative AI into toys and consumer products, prompting questions about how children will grow up with these systems.


We are relying on ChatGPT more and more, even as we’re still arguing about who should govern it, how transparent it should be, and what trade-offs we’re willing to accept.


So, if the world hasn’t made sense of ChatGPT, what would it look like if we did?


I suspect it wouldn’t mean solving some grand philosophical puzzle about what AI really is.


Instead, it would look like a series of very practical, very human decisions:


  • Clear norms for use

 

  • When is it fine to use ChatGPT (brainstorming, drafting, translation, summarizing)? When is it not (bypassing learning, impersonating others, fabricating expertise)?

 

  • Shared responsibility

 

  • We can treat ChatGPT the way we treat calculators, spell-checkers, or search engines: useful tools whose outputs we remain responsible for. “The AI told me to” can’t be a moral or professional defense.

 

  • Transparent policies

 

  • Schools, universities, employers and public institutions need to be explicit about how they use these tools, how they expect others to use them, and how they’re protecting privacy and fairness.

 

  • Deliberate skills

 

  • Being fluent with AI now includes two skills: knowing how to ask for help, and knowing when not to. That second skill, choosing to struggle through something without automation, will matter more than we think.


If we can normalize those habits, ChatGPT becomes something quieter but more useful: a powerful, occasionally weird, deeply imperfect extension of our own capacity to think, write and plan.


Rick’s Commentary


There’s one final, unavoidable irony in writing about ChatGPT at three years old.


This blog post was itself drafted with ChatGPT’s help.


That doesn’t make it less mine. It means the ideas, concerns and structure are human, while the phrasing, reorganizing and fact-checking were assisted by a system that is both astonishingly capable and fundamentally dependent on our judgment.


Perhaps that’s the clearest way we can make sense of ChatGPT right now:


  • It is neither oracle nor enemy.

 

  • It is not the future by itself, but one of the tools we’ll use to shape that future.

 

  • It reflects us - our questions, our biases, our hopes, our shortcuts - far more than we’d sometimes like to admit.


Three years in, the task is less about decoding the mystery of ChatGPT and more about deciding who we want to be while we’re using it.


©2025 by Rick LeCouteur