Robert Clougherty, CIO, CampusWorks Inc

With 35 years of higher education experience, Dr. Robert Clougherty serves as CampusWorks CIO for Drew University.  Over the course of his career, he has been a faculty member (tenured full professor), Director, Dean, and Provost, and has run his own consulting company and an app-based start-up.  He has founded two colleges from scratch, founded the Institute for Technological Scholarship, and served as Executive Director of the Tennessee Advanced Computing Technologies (TACT) Institute.  He has published in multiple disciplines ranging from online learning and literature to chemistry and environmental science.


ChatGPT has all of higher education aquiver.  Fears abound of AI writing essays, programs, and scripts, yet some faculty are making hopeful and resoundingly positive statements about how it will transform their teaching.  We have been here before, and in such conflicts it is usually the latter who win, while the former come to seem almost comic.

At the opening of The Hunchback of Notre Dame, the students are beginning to riot; the licensed copyist to the University states: “I tell you, sir, this is the end of the world.  The students were never so riotous before; it’s the cursed inventions of the age that are ruining us all, –artillery, bombards, serpentines, and particularly printing, that other German pestilence.  No more manuscripts, no more books!  Printing is death to bookselling.  The end of the world is at hand.”

We have heard it all before.  Socrates declared that writing would destroy memory and critical thinking.  Calculators would prevent students from learning math.  Wikipedia would embed false information at every turn.  Laptops, phones, wearables: take your pick, and an opposition stands ready.  Believers in tradition assume that everything must be done by the human brain because it is the purest form of intelligence (put another way, they are saying that the way they learn and think is superior to the way others do).  I would like to challenge some of these assumptions.

A biological entity (and it need not be human, or even a brain) is magnificently intelligent in its very structure.  Each cell contains the information and intelligence (DNA) to replicate itself in a way that contributes to the whole organism.  In other words, the content (the information) is one and the same as the process (the knowledge of how to copy itself), such that each copy can in turn copy itself.  It is brilliant architecture.
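The idea that content and process can be one and the same has a direct computing analogue in the quine, a program whose data is a complete description of its own process.  A minimal sketch in Python (the DNA analogy is loose, but illustrative):

```python
# A quine: the string s is the program's "content," and the print
# statement is its "process" -- together they reproduce the full
# source exactly, and that copy could in turn copy itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the program prints its own source code; running that output prints the same thing again, just as each copied cell carries everything needed to copy itself.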

When it comes to computing, we do not have the luxury of such an architecture; we follow von Neumann architecture.  In this model, the CPU (the processor) is separate from the memory (the content, information, and instructions).  The CPU can do nothing without instructions from memory, and memory has no value unless it is connected to a processor.  Thus, the model works by separating information storage from the actual processes wherein data is morphed into information and information is morphed into knowledge.
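The separation can be made concrete with a toy machine.  In the sketch below (an illustrative made-up instruction set, not any real one), a single memory array holds both the program and its data, and a processor does nothing except fetch and execute what that memory tells it to:

```python
def run(memory):
    """Fetch-execute loop: the CPU holds only a program counter and an
    accumulator; all content -- program and data alike -- lives in memory."""
    pc, acc = 0, 0
    while True:
        op, arg = memory[pc]      # fetch the next instruction from memory
        if op == "LOAD":          # memory cell -> accumulator
            acc = memory[arg]
        elif op == "ADD":         # accumulator += memory cell
            acc += memory[arg]
        elif op == "STORE":       # accumulator -> memory cell
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1

# Cells 0-3 are the program; cells 4-6 are the data it operates on.
memory = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
run(memory)
print(memory[6])  # 2 + 3 = 5
```

Note that the processor itself is empty of content: change the instructions or the data cells and the very same CPU loop computes something entirely different.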

As a species, we humans have learned that we have access to more information than we can hold in our biological memory, and we have used that information to evolve both our lives and our technology.  We have managed large quantities of information through a process similar to a von Neumann architecture: storing information outside of our brains.  In short, we developed the ability to store and find knowledge outside of ourselves.  This includes storing information (libraries being the best example), sending information to others, and even providing instructions on how to perform a process (most of our home appliances are covered in icons conveying how to operate them).  The development of our culture has occurred because of this ability.

The advent of writing by the Sumerians allowed us to symbolically store data and information outside of ourselves (as the majority of found cuneiform tablets are business records, they were storing data and information).  The result, writing, received a definitively negative vote from Socrates (whom we remember because his student Plato wrote it down) as the first recorded Higher Education reaction to an intellectual and information-processing technology.  Higher Ed’s record has not improved.  (Ironically, most will still tell you that the “Socratic method” is the superior methodology for Higher Education teaching.)

When pocket calculators first came out, they were vilified.  They could actually perform the “intelligence process” outside of the human brain.  The popular reaction was that children would not be able to learn or perform mathematics (as an aside, consider what the generations who grew up with calculators have been able to do mathematically with computers and data).  When personal computers came out, they were likewise vilified.  My own dissertation advisor was upset that I was using a tool that let me cut, paste, and insert without retyping the entire page.  The reaction to smartphones has been similar.  Despite the absolute failure of every doomsday scenario about technology, the same doomsday scenario is simply applied to the next technology.

As the development of search engines increased our ability to gather information from around the world, we were warned not to trust the internet.  When Wikipedia established a technology to have a single, linked, and searchable site for community learning, the majority of voices in higher education railed against it.    

In every instance, the world survived doomsday and students remained intellectually cogent.  As each technology matured, the warnings subsided, the new skills it required became codified (despite Socrates’ objections, we now teach entire courses in writing), and the curriculum evolved around those new skill sets.

And now we have Generative AI, and ChatGPT in particular.  Articles abound as to its danger and the destruction of learning as we know it.  It must be added that there is also a great deal of important grassroots work at the faculty level on how to use it effectively (as there has been for nearly every advancement that has come before).  To calm the objections, we have to clarify what it is we are discussing.  If we split the term, the GPT (Generative Pre-trained Transformer) is the Artificial Intelligence component.  The reactions and fears that have emerged indicate its strengths.  For Higher Education, it represents new opportunities as a teaching and learning tool (it stores and processes information in the same system), and, in an industrial sector looking for financial savings and efficiencies, an opportunity to become more productive and efficient.

Two important caveats must be added here.  First, efficiency does not mean job replacement; it means performing current tasks better and freeing up time for the tasks individuals often say they are too busy to perform.  Second, ChatGPT works best in interactive development.  It is a conversation, with the user providing the appropriate prompts.  In other words, humans do not become less important; they become more important, and skill and knowledge grow more important still, so that a user knows which questions to ask to get the best response from ChatGPT.  Aristotle, in the Posterior Analytics, argues that all knowledge relies on pre-existent knowledge.  ChatGPT is only as good as the prompts and guidance the user provides.

So, what makes ChatGPT so terrifying?  It is the Chat element.  AI has been around for a long time, and what we have now is the ability for anyone with basic language skills to be able to use it.  In other words, you no longer need to know how to program to use Artificial Intelligence.  In many ways, this democratization of computing power should be celebrated.  

Aristotle argued that the ability to communicate intelligent thought makes us human, and ChatGPT does just that.  Other things that make us human are individuality and the division of collaborative tasks.  When one observes other species in a group, they follow and mimic behavior rather than dividing tasks and taking individual roles.  Non-intelligent cyber systems do the same: every user providing the same input receives the same output.  Generative AI shifts the ground and does behave in an individual way.

The biggest fear, and the biggest argument, is that AI could take over and eradicate humans.  Perhaps the real fear is not of AI at all, but a fear of ourselves that we cast onto the machine.
