Back in the late 1900s (1997, to be precise), I learned about this theoretical concept called artificial intelligence in high school.

I asked my cousin (a CS college grad) about it. He explained that AI is a research field trying to emulate the human brain with a computer program.

I wondered whether that would be a nuisance. For instance, while I'm studying I get distracted and decide to start watching TV (free will?). What if the computer did the same? We ask it to solve a differential equation, and in the middle of the calculation it decides to go play Prince of Persia or something. He assured me that my thinking was flawed.

Fast forward to the early 21st century. We now have LLMs that can hold a conversation with humans, write terrible poems, create paintings, etc. This is the closest we've come to emulating a human brain (I know this is not AGI; stop sending me errata).

Simon recently blogged about this idea called Vibes Based Development.

As a computer scientist and software engineer, I find LLMs infuriating.

I'm used to programming, where the computer does exactly what I tell it to do. Prompting an LLM is decidedly not that!

Computer programs produce reliable, repeatable output. LLMs definitely do not.

Or could it be that LLMs have free will (unlike a conventional computer program)?

What if free will is a requirement for intelligence?