AI, Critical Thinking, and the Future of Freedom

University professors around the world are struggling to adapt to a student body that is, for the first time in history, able to generate essays and perform other sophisticated tasks in seconds by asking an Artificial Intelligence (AI) chatbot such as ChatGPT to roll up its digital sleeves and do the dirty work.

Since ChatGPT’s launch, I have taken the general view that higher education would eventually have to come around to allowing students to use such technologies as just another tool for research and writing, much as calculators are usually considered fair game in math classes. I was therefore delighted to stumble upon the AI chatbot policy of Stephen Hicks, Professor of Philosophy at Rockford University in Illinois. His policy reads as follows:

I encourage you to use ChatGPT.

It is a powerful new research tool. Anything that enables you to learn faster and become more skillful is to be embraced.

At the same time, a tool is not a substitute for your own self-development. As a student, your goal is to acquire as much new knowledge as you can and to become skillful with every useful learning tool available. The goal is for you to become knowledgeable and wise, and for you to become excellent at research and judgment.

Metaphorically: Become a lean, mean learning machine. And make that a personal goal and a matter of honor.

If you are taking this course for credit, it is your responsibility to demonstrate that the work you submit is your own. There are many ways to do that, and you and I can consult individually to determine which way is best for you.

Stephen R. C. Hicks
Professor of Philosophy

As you can see, while Hicks recognizes the importance of students’ own self-development, he still embraces a disruptive technology that can push beyond the more laborious traditional methods of research and writing.

This reminds me of my seventh-grade math teacher. She was an elderly lady who could often work out long division problems in her head, and she recounted stories of people she had known in her youth who were even more capable. (We, her students, worked them out on paper.) Given that she grew up before the pocket calculator, her more advanced quantitative skills revealed the extent to which she was a product of her time. Similarly, in middle school I was (I believe) a better speller than I am now, but then word processors disrupted that with spell check, and spelling bees seemed to disappear as I got older. While I am more reliant on spell check than I was in my younger years, that reliance has freed up both cognitive space and time to focus on more advanced (and less mundane) tasks. Alfred North Whitehead articulated this phenomenon beautifully:

It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

It seems to me that, perhaps for the first time ever, it could be critical thinking itself that is disrupted, and at scale; a calculator disrupts it only on a much smaller scale.

A new problem for freedom?

If I am correct that AI will disrupt critical thinking itself, then it appears that we have a new potential threat to a free society. Consider, for a moment, the motto of the Universidad Francisco Marroquín in Guatemala:

The teaching and dissemination of the ethical, legal, and economic principles of a society of free and responsible persons

Being free and responsible implies a citizenry capable of thinking for itself. But if I am correct that AI is on its way to disrupting critical thinking, then new generations may never learn to think critically in the first place. If a free society requires a citizenry of free and independent thinkers, the future of freedom confronts a new problem.

Or perhaps that worry is overly pessimistic. The way I see it, some combination of at least two things is likely to happen. At first glance they may seem at odds, but one could prove true in one respect while the other is simultaneously true in another:

  1. Critical thinking is diminished by an overreliance on AI, which over the long term serves as a crutch. (This is the main concern I raise in this article.)
  2. Critical thinking reaches new heights, aided by AI.

What remains of raw, merely human critical thinking (without the aid of AI) will likely need to be channeled into a healthy distrust of the biases of the AI bots and their developers, and into understanding the incentives under which the developers (and those who financially support them) operate.

Whatever the outcome, let’s hope that AI (on the whole) works to serve as a tool for the betterment of the human condition, allowing for “a society of free and responsible persons.”

The post AI, Critical Thinking, and the Future of Freedom was first published by the American Institute for Economic Research (AIER), and is republished here with permission. Please support their efforts.