In “Can Computers Think?” John Searle argues against the prevailing view in philosophy, psychology, and artificial intelligence, which emphasizes the analogies between the functioning of the human brain and the functioning of digital computers (Searle, 372). He asks whether a digital computer, as he defines it, can think. Specifically, he asks whether instantiating or implementing the right computer program with the right inputs and outputs is sufficient for, or constitutive of, thinking, to which he answers no, since “computer programs are defined purely syntactically” (Searle, 376). In this essay, I will argue that, according to Searle’s own definition of semantic understanding, computers do have at least a minimal amount of semantics. I will argue that Margaret Boden’s objections to Searle’s argument in “Escaping from the Chinese Room” are strong and that the internal symbols and procedures of a computer program “do embody minimal understanding” (Boden, 387).
I will begin this essay by examining Searle’s Chinese room thought experiment, which is meant to simulate the processes of a digital computer. I will detail how, according to Searle’s own multiple definitions of thinking, the person inside the Chinese language room is in fact thinking, citing arguments from Boden. I will conclude by arguing that syntactical processes involve a certain amount of prior semantic understanding, and that instantiating or implementing the right computer program with the right inputs and outputs is sufficient for, or constitutive of, Searle’s definition of thinking.
To differentiate syntactical processes from semantical understanding, Searle uses a thought experiment ...