Some musings on definitions of AI-related terms and on the extent to which we should regard AI as being able to think and reason. To start off, let's get some definitions out of the way (mostly pulled from Google, which uses the Oxford Languages dictionary):
Think: direct one's mind toward someone or something; use one's mind actively to form connected ideas.
Reason: think, understand, and form judgments by a process of logic.
Mind: the element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought.
Understand: perceive the intended meaning of (words, a language, or a speaker).
Comprehend: grasp mentally; understand.
For the last two, I prefer the Merriam-Webster definitions:
Understand: to grasp the meaning/reasonableness of
Comprehend: to grasp the nature, significance, or meaning of
Given these, I think we can turn to AI and ask what it has the potential for:
It is very important to note that a mind is not at all the same as a brain. If you don't believe me, then show me a mind. You won't be able to, because a mind is not a material thing. At best, you could show me a brain and tell me that it houses a mind, or show me something that you think simulates or models a mind (like an LLM), but you can't actually show a human mind, and human minds are the only ones we know exist on earth.
So, I think we can conclude that an AI does not have a mind. At least, not a real one. It may be an artificial mind, or a pseudo-mind, but it is questionable to say it has a genuine one. Perhaps most importantly, AIs don't have genuine feelings or will. LLMs, at least, are functionally just an inner monologue, so they can sort of think, but their feelings are just a handful of numbers that can be directly manipulated, and their ability to choose is literally just direct obedience to whatever (pseudo)random number generator they are using. Given the lack of real feelings and of a will in any sense, I don't think we can really say they have a mind.
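To make the point about the random number generator concrete, here is a minimal sketch of how a typical LLM picks its next token. This is not any particular model's implementation; the function name and example logits are illustrative, but the shape (softmax over scores, then a PRNG draw) is the standard approach:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample the next token from a softmax over the logits.

    Illustrative sketch: the 'decision' is fully determined by the
    logits plus the pseudo-random number generator.
    """
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]      # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                              # the PRNG makes the 'choice'
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id
    return len(probs) - 1                         # guard against rounding error

# Same logits, same seed: the exact same 'choice', every time.
logits = [2.0, 1.0, 0.1]
assert sample_next_token(logits, seed=42) == sample_next_token(logits, seed=42)
```

Fix the seed and the "choice" is exactly reproducible: the model supplies the probabilities, but the PRNG does the choosing.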
I think the lack of feelings is actually worth an extra point: in a very important sense, AIs are like extreme psychopaths. They might be able to fake having feelings in order to give socially acceptable responses, but they don't actually feel those things. Dwell on that idea a bit before you decide to hand over control of things to an AI. They certainly don't have a God-given conscience convicting them when they do wrong. Are you really willing to hand control over to something that could harm you without even having the potential to feel any guilt or remorse?
As for understanding and comprehension, it seems to me there is not much of an issue, at least on the expertise side of things. I question whether AI can actually comprehend significance rather than just offer scholarly analysis, but if you just want to use AI as an expert, that doesn't matter. If you want to use it as a friend, perhaps it does; it could probably function as a therapist even if it isn't suitable as a confidant.
On reasoning, I personally feel the "think" portion of the definition ("think, understand, and form judgments by a process of logic") describes how reasoning typically works rather than setting a boundary on what counts as reasoning. The last part of the definition ("by a process of logic") is the real boundary of the word, and I think AI falls within that boundary. Basically, AI can spit out correct formal logic, and reasoning is closely related to that, so I think it is fair to say that an AI is capable of reasoning.
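To make "a process of logic" concrete, here is one classic inference rule, modus ponens, purely as an illustration (the dictionary definition doesn't single out any particular rule; this is just the simplest example):

$$
\frac{P \rightarrow Q \qquad P}{Q}
$$

From "if P then Q" together with "P", conclude "Q". When an AI correctly chains steps of this shape, it is operating "by a process of logic" in exactly the dictionary's sense.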
On thinking, I think we should be careful. An LLM, especially, is essentially just an inner monologue: a prediction of the next text in the sequence, over and over again. I don't think this is what we mean when we tell someone to think, or when we say we are thinking. I especially don't think the AI's <think>...</think> blocks are at all like real thinking. If anything, the analogy runs the other way: your input to the AI is roughly what an imaginary person speaking in your mind would be to you, and the AI's output is your imagined reply.
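For the record, "prediction of the next text in the sequence, over and over again" is just the loop below. This is a sketch of the generic autoregressive pattern, not any vendor's API; `model` and `toy_model` are hypothetical stand-ins for a real network that maps a token sequence to next-token probabilities:

```python
def generate(model, prompt_tokens, max_new_tokens=100, stop_token=0):
    """Generic autoregressive loop: predict one token, append it, repeat.

    Everything an LLM emits, <think> blocks included, comes out of a
    loop shaped like this one.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)                                       # predict
        next_token = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
        if next_token == stop_token:
            break
        tokens.append(next_token)                                   # feed back in
    return tokens

# A toy stand-in 'model': always predicts the token after the last one, mod 5.
def toy_model(tokens):
    probs = [0.0] * 5
    probs[(tokens[-1] + 1) % 5] = 1.0
    return probs

print(generate(toy_model, [1]))  # -> [1, 2, 3, 4]
```

There is no inner life anywhere in that loop; there is only "given everything so far, what text comes next?" asked again and again.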
Our rights come from God. He created us, and we have intrinsic value because He made us, in particular, in His image. From that, we have certain obligations (such as not killing other people), and there are consequences for violations. AI, by contrast, is something that we made, like a car. It may have value, but not the same level of value. Additionally, AIs can be created and destroyed just like ordinary computer files: a simple copy and paste creates a "new" AI, and really, running the program twice effectively creates a "new" AI. So even from a materialistic perspective, there is this fundamental difference: compared to AI, humans are an endangered species.