So You Think You’ve Awoken ChatGPT

Read the full article on LessWrong.com

I was reading this article about AI and consciousness, and a couple of quotes made me think about the future of AI, specifically Artificial General Intelligence (AGI) and the possibility of AIs having consciousness.

Obviously, I disagree with the premise because I think there is something unique about humanity: we were created by God and have value beyond what we're able to do. One of my old high school friends and I had a conversation the other day about AI, and he said, "If there was an AI (in a robot body that looked like mine) that had all my memories, that communicated like me, and did everything that I do, would that not be me?" My response would be that while an AI can infer everything, and maybe it could eventually reason through everything (i.e., one day an AI could have complete rationality), it could never be human, because I'm not sure an AI could believe, or understand the art of selectively choosing between rationality and irrationality (or better, faith).

In essence, I think the debate about whether AGI could exist boils down to the debate about whether Faith and Reason can coexist, and whether the combination of the two is really the highest form of humanity.

Interesting Quotes

In my experience, you have to repeatedly remind yourself that AI value judgments are pretty much fake, and that anything more coherent than a 3/10 will be flagged as “good” by an LLM evaluator. Unfortunately, that’s just how it is, and prompting is unlikely to save you; you can flip an AI to be harshly critical with such keywords as “brutally honest”, but a critic that roasts everything isn’t really any better than a critic that praises everything. What you actually need in a critic or collaborator is sensitivity to the underlying quality of the ideas; AI is ill suited to provide this.

Also, LLMs are specifically good at common knowledge and the absolute basics of almost every field, but not very good at finicky details near the frontier of knowledge. So if LLMs are helping you with ideas, they’ll stop being reliable exactly at the point where you try to do anything original.
