{"id":2299,"date":"2026-04-10T23:56:57","date_gmt":"2026-04-10T22:56:57","guid":{"rendered":"https:\/\/www.oubliette.org\/blog\/?p=2299"},"modified":"2026-04-11T10:56:56","modified_gmt":"2026-04-11T09:56:56","slug":"everybodys-got-one","status":"publish","type":"post","link":"https:\/\/www.oubliette.org\/blog\/index.php\/2026\/04\/10\/everybodys-got-one\/","title":{"rendered":"Everybody\u2019s Got One"},"content":{"rendered":"\n<p class=\"has-drop-cap\">If you woke up this morning hoping for one more person\u2019s take on all this \u2018AI\u2019 stuff, I guess it\u2019s your lucky day.<\/p>\n\n\n\n<p>You won\u2019t find a(nother) rant about how large language models (LLMs) <a href=\"https:\/\/www.experimental-history.com\/p\/bag-of-words-have-mercy-on-us\">aren\u2019t all that \u2018intelligent\u2019<\/a>, how they pose an <a href=\"https:\/\/www.goodreads.com\/book\/show\/228646231\">existential risk to humanity<\/a>, <a href=\"https:\/\/arstechnica.com\/ai\/2026\/04\/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms\/\">make people dumber<\/a>, are eroding our ability to <a href=\"https:\/\/www.ivanturkovic.com\/2026\/02\/25\/ai-made-writing-code-easier-engineering-harder\/\">build<\/a> and <a href=\"https:\/\/pluralistic.net\/2026\/01\/06\/1000x-liability\/\">maintain<\/a> software, are <a href=\"https:\/\/fortune.com\/2026\/04\/06\/ai-tech-displacement-effect-gen-z-16000-jobs-per-month\/\">eliminating millions of jobs<\/a> thereby upending the global economy, or how the whole thing is a <a href=\"https:\/\/www.wheresyoured.at\/the-subprime-ai-crisis-is-here\/\">bubble floating over a pyramid (scheme)<\/a> and <a href=\"https:\/\/catvalente.substack.com\/p\/blood-money-the-anthropic-settlement\">built entirely on theft<\/a>.<\/p>\n\n\n\n<p>All of those things seem true to me to varying degrees (especially the theft part), but that\u2019s not what this is about. 
So if a rant about one of those is what you\u2019re looking for, best keep looking.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Ok, I said no rant.  I lied.  For the record, I utterly abhor that a small number of sociopaths are building empires and fortunes based on what can only be described as blatant and intentional theft. It\u2019s not the first time this has happened &#8211; see: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Robber_baron_(industrialist)\">robber barons<\/a> &#8211; and it probably won\u2019t be the last. But these people are \u201csorta my people\u201d and my small part in enabling this reality fills me with no small amount of regret. Even if AI ends up being amazing (and to me the jury is most definitely still out), the original sin will remain. <\/p>\n<\/blockquote>\n\n\n\n<p>If you\u2019re early in a career in software and looking for guidance, or predictions about the course of the industry, there are <a href=\"http:\/\/brooker.co.za\/blog\/2026\/02\/07\/you-are-here.html\">lots<\/a> of <a href=\"https:\/\/christophermeiklejohn.com\/ai\/engineering\/2026\/04\/01\/software-engineering-is-becoming-civil-engineering.html\">places to look<\/a>. 
If you\u2019re well into&nbsp;that career arc and trying to orient and navigate, there is no <a href=\"https:\/\/www.jamesdrandall.com\/posts\/the_thing_i_loved_has_changed\/\">shortage<\/a> of <a href=\"https:\/\/nolanlawson.com\/2026\/02\/07\/we-mourn-our-craft\/\">thought-provoking<\/a> and often <a href=\"https:\/\/leehanchung.github.io\/blogs\/2026\/04\/05\/the-ai-great-leap-forward\/\">depressing<\/a> <a href=\"https:\/\/andrewmurphy.io\/blog\/the-five-stages-of-losing-our-craft\">perspectives<\/a> to <a href=\"https:\/\/dev.to\/harsh2644\/ai-is-creating-a-new-kind-of-tech-debt-and-nobody-is-talking-about-it-3pm6\">consider<\/a>.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>If you\u2019re looking to understand how these chat bots (and the large language models they\u2019re built on) do the seemingly magical things they do, take a few hours (!) and <a href=\"https:\/\/www.youtube.com\/watch?v=7xTGNNLPyMI\">let Andrej Karpathy explain<\/a>.<\/p>\n<\/blockquote>\n\n\n\n<p>All I\u2019m offering is a (hopefully cogent and coherent) exposition of what\u2019s in <em>my<\/em> head.<\/p>\n\n\n\n<p>Caveat emptor.<\/p>\n\n\n\n<p>I\u2019m not an Artificial Intelligence expert. Like others \u201cof that age\u201d I had a dream of teaching machines to think. In my teens I voraciously consumed science fiction, read futurists like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Hans_Moravec\">Hans Moravec<\/a>, and was convinced we were probably \u201creally close\u201d to being able to build thinking machines. For a while I considered studying cognitive neuroscience on top of computer science, but realized it was probably biting off more than I could comfortably chew.<\/p>\n\n\n\n<p>I <em>was<\/em> interested enough to do some related coursework during my computer science undergraduate, in the mid 90\u2019s. 
At the time the <em>practical<\/em> state of the AI art was things like <a href=\"https:\/\/en.wikipedia.org\/wiki\/Expert_system\">expert systems<\/a>, and there was a bit of an \u201cis it\/isn\u2019t it\u201d tug-of-war with the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Information_retrieval\">information retrieval<\/a> folks who focused on organizing information in ontologies and graphs. Image recognition and natural language processing were unsolved areas of active research, and you could tell the \u201cAI people\u201d \u2018cause they wrote code in <a href=\"https:\/\/en.wikipedia.org\/wiki\/Prolog\">Prolog<\/a> or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lisp_(programming_language)\">Lisp<\/a>.<\/p>\n\n\n\n<p>I ended up being pretty disillusioned by the state of the art and didn\u2019t give it much more time or attention for years.<\/p>\n\n\n\n<p>I started paying attention again around ten years ago. Advances in compute (and a bit in algorithms) made things that had been \u201cimpossible\u201d (or at least impractical) suddenly possible or \u201cadjacent possible.\u201d This was right around the time a bunch of non-PC gamers started to care about GPUs (Graphics Processing Units).<\/p>\n\n\n\n<p>Since then I\u2019ve built stuff &#8211; and helped teams build stuff &#8211; using machine learning (a term I strongly prefer over \u201cAI\u201d), and I\u2019ve built and trained small models, and used the current (recent?) 
crop of large language models enough to have a sense of their current capabilities and limitations.<\/p>\n\n\n\n<p>So I guess I\u2019d describe myself as \u201cnot completely clueless.\u201d<\/p>\n\n\n\n<p>So\u2026 \u201cAI.\u201d Let\u2019s start with two dirty little secrets.<\/p>\n\n\n\n<p><span style=\"text-decoration: underline\">Dirty-little-secret #1<\/span>: In the decades I\u2019ve been in the software industry, I\u2019ve rarely loved the act of writing code.<\/p>\n\n\n\n<p>I loved (and still love, mostly) solving problems for people who couldn\u2019t solve those problems themselves. Writing code was a means to that end &#8211; not the end.<\/p>\n\n\n\n<p>I didn\u2019t <em>hate<\/em> writing code, but knowing I\u2019d figured out a solution was the really rewarding bit. And (or maybe \u201cSo\u201d) I was really never \u201cthe best coder\u201d in a group. I was a \u201cpretty good programmer\u201d and I worked to develop good habits that let me collaborate with people who were better than me and only rarely feel like the idiot holding us back.<\/p>\n\n\n\n<p>I also had some experience early in my career that forced me to realize that over time &#8211; especially as the people who wrote it disperse &#8211; code becomes more of a liability than an asset. So, in the long term, less can very much be more.<\/p>\n\n\n\n<p><span style=\"text-decoration: underline\">Dirty-little-secret #2<\/span>: One of my few persistent \u201ccareer goals\u201d has been to put myself out of a job. To make my role unnecessary. I think of it as being \u201clazy in the long term\u201d &#8211; willing to work hard on a problem today so I can stop working on or even thinking about that problem entirely \u201ctomorrow.\u201d<\/p>\n\n\n\n<p>Despite the marketing hype, the current generation of LLM-based tools doesn\u2019t have the potential to make \u201cpeople like me\u201d obsolete.  
What they do have is the potential to drastically reduce the number of people who need \u201cpeople like me\u201d to help them solve problems with computers and technology.<\/p>\n\n\n\n<p>In a world that \u201csoftware ate,\u201d but where most people can\u2019t self-service their software needs, these tools have tons of potential for disintermediation and empowerment. Disintermediation and empowerment seem good.<\/p>\n\n\n\n<p>So you might think I\u2019d be&nbsp;<strong>loving<\/strong>&nbsp;these tools.<\/p>\n\n\n\n<p>I thought I would, too. <\/p>\n\n\n\n<p>But, it turns out, I do not.<\/p>\n\n\n\n<p>That\u2019s not to say I <em>hate<\/em> them. I don\u2019t. The opposite of love isn\u2019t hate. &nbsp;It\u2019s <a href=\"https:\/\/quoteinvestigator.com\/2019\/05\/21\/indifference\/\">indifference<\/a>.<\/p>\n\n\n\n<p>On a personal level, I\u2019m mostly indifferent. <\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>On the broader implications and impact of these tools and technologies, it&#8217;s more accurate to say that I\u2019m <a href=\"https:\/\/dictionary.cambridge.org\/dictionary\/english\/ambivalent\">ambivalent<\/a>, and from a professional&nbsp;perspective not actively embracing AI is potentially problematic. Those are both topics for another time.<\/p>\n<\/blockquote>\n\n\n\n<p>I\u2019ve come to realize that I just don\u2019t enjoy using these tools, and I <em>really<\/em> struggle to convince myself that the value they have is worth their costs. (NB: Their <em>actual<\/em> costs. Not just the loss-leading \u201cwe\u2019ll make it up in volume\u201d fistfuls of dollars each month they\u2019re priced at today.)<\/p>\n\n\n\n<p>They\u2019re too often opaque, capricious and unpredictable, making&nbsp;it unwise to trust their results. 
&nbsp;That&nbsp;makes me not reach for them to answer questions, or solve problems, unless I\u2019m already confident I know the answer.<\/p>\n\n\n\n<p>I think about the output from an LLM as I might the writings of a hard-line political pundit. Everything has to be skeptically considered. Everything has to be fact checked. It turns out that without <a href=\"https:\/\/www.mediamatters.org\/ann-coulter\/endnotes-coulters-latest-book-rife-distortions-and-falsehoods\">chasing the footnotes<\/a>, you\u2019ll never know if the reference material really says that, or even if it exists.<\/p>\n\n\n\n<p>Maintaining the appropriate level of skepticism is real work.<\/p>\n\n\n\n<p>I find that these tools <em>transform<\/em> work but don\u2019t reliably reduce or eliminate it.<\/p>\n\n\n\n<p>They turn writing English into reading and re-writing English. <\/p>\n\n\n\n<p>They turn writing code into reading, reasoning about, and fixing code.<\/p>\n\n\n\n<p>They turn fact and knowledge seeking into, well, fact and knowledge seeking.<\/p>\n\n\n\n<p>Don\u2019t read this as me saying these tools have no value. That\u2019s not my point at all. I\u2019m making a bounded statement about my experiences with these tools.<\/p>\n\n\n\n<p>The other thing I\u2019ve learned, which surprised me at first, is that using an LLM to answer a question, or write code, or solve a problem makes me feel \u2026 nothing.<\/p>\n\n\n\n<p>I was drawn to computing, and ultimately studied and pursued it as a profession, because I found it <em>rewarding<\/em>. Not just financially &#8211; though getting well paid to do something I enjoyed was certainly not a bad thing. Solving problems &#8211; especially tricky problems &#8211; feeds my brain endorphins.<\/p>\n\n\n\n<p>And who doesn\u2019t love endorphins, right?<\/p>\n\n\n\n<p>I love learning, and each time I attack a problem &#8211; win or lose &#8211; it changes me a bit. Teaches me something.<\/p>\n\n\n\n<p>I get none of those rewards when I use these tools. 
The victory feels hollow. As if I\u2019ve cheated. Or have <em>been<\/em> cheated.<\/p>\n\n\n\n<p>A friend and former colleague said we\u2019ve \u201c\u2026largely become a culture of answer seekers, not knowledge seekers. We want the answer, but don\u2019t particularly care to understand why or how. This was a problem before AI.\u201d<\/p>\n\n\n\n<p>I think he\u2019s right, and his observation touches a nerve. One of my most valuable (and most irritating) habits was instilled in me at a young age by my uncle Denis &#8211; an actual working scientist who told me to &#8220;Always ask why.&#8221;<\/p>\n\n\n\n<p>Suppressing that impulse, lessening the drive to understand, makes me\u2026 sad.<\/p>\n\n\n\n<p>I stumbled across an analogy that resonated with me &#8211; using language models and chat bots to write or solve \u201cthinking\u201d problems for you, this author said, is like bringing a forklift to the gym to lift weights. If your only goal is to lift the weights, fantastic, job done &#8211; provided the model doesn&#8217;t drop the weight on someone&#8217;s toes, or decide to drive through the locker room instead. But if any part of the goal is to <em>become a person who can lift weights<\/em> &#8230; learning to drive a forklift is becoming someone who can drive a forklift, not someone who can lift weights.<\/p>\n\n\n\n<p>If what you need to do is move lots of heavy things, over and over, day in and day out, and that&#8217;s all there is, by all means use a forklift. Just realize that what you&#8217;re getting good at is driving a forklift.<\/p>\n\n\n\n<p>This might lead you to ask the entirely reasonable question &#8211; \u201cso what?\u201d<\/p>\n\n\n\n<p>Well, from a personal perspective, I just don\u2019t use these tools much. I don\u2019t pay for a chatbot and don\u2019t imagine that changing. I occasionally ask Gemini questions &#8211; when it\u2019s not outright fabricating things the model is pretty good at summarization. 
I\u2019ve had Claude write code &#8211; especially when I already know what the code needs to do and the cost of verifying it\u2019s \u201cdone it right\u201d is lower than the cost of me just doing it.<\/p>\n\n\n\n<p>But I don\u2019t use AI every day. Or even most days.<\/p>\n\n\n\n<p>I don\u2019t ask a chatbot for feedback on my writing, for instance. I write it, read it, revise it, and sometimes ask other people to read and critique it too. So my writing has occasional typos, sometimes mixes metaphors (thanks, Matt!), and can be a bit awkward. <\/p>\n\n\n\n<p>And that&#8217;s ok.<\/p>\n\n\n\n<p>Over the years I\u2019ve been writing, I\u2019ve gotten better at it &#8211; and the point isn\u2019t just to lift the weights.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you woke up this morning hoping for one more person\u2019s take on all this \u2018AI\u2019 stuff, I guess it\u2019s your lucky day. You won\u2019t find a(nother) rant about how large language models (LLMs) aren\u2019t all that \u2018intelligent\u2019, how they pose an existential risk to humanity, make people dumber, are eroding our ability to build &hellip; <a href=\"https:\/\/www.oubliette.org\/blog\/index.php\/2026\/04\/10\/everybodys-got-one\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Everybody\u2019s Got 
One&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[7,5],"tags":[],"class_list":["post-2299","post","type-post","status-publish","format-standard","hentry","category-i-hate-computers","category-life-the-universe-and-everything"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/posts\/2299","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=2299"}],"version-history":[{"count":107,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/posts\/2299\/revisions"}],"predecessor-version":[{"id":2407,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/posts\/2299\/revisions\/2407"}],"wp:attachment":[{"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=2299"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=2299"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.oubliette.org\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=2299"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}