Robert Clougherty, PhD, CIO, CampusWorks Inc

With 35 years of Higher Education experience, Dr. Robert Clougherty became CampusWorks CIO for Drew University. Over the course of his career, he has served as a faculty member (tenured, full professor), Director, Dean, and Provost, and has also run his own consulting company and an app-based start-up. He has founded two colleges from scratch, founded the Institute for Technological Scholarship, and served as Executive Director of the Tennessee Advanced Computing Technologies (TACT) Institute. He has published in multiple disciplines, ranging from online learning and literature to chemistry and environmental science.


Let me begin with an experiment. Take a piece of paper and a pencil and divide 22,463 by 17, but before you do, one caveat: as you write it, use Roman numerals… If you pursue that and do not give up in despair (“How do you write 20,000 in Roman numerals?”), you will realize that the task is effectively impossible because Roman numerals have no place value, no zero, and no way to write the decimal remainder. Pause, and consider for a moment how the way in which knowledge is shaped, processed, and stored creates the opportunities, possibilities, and methodologies for us to work with it. While we can identify large scientific shifts such as the Copernican, Darwinian, and Einsteinian, it is of note that each is named for a specific individual (not to demean their contributions); it makes for a simpler narrative to focus on the character rather than the environment that shaped them (the English professor in me will ignore narratological theory for the moment).

Each of these changes in our management of knowledge and information resulted not simply in the tool or paradigm itself, but in how we think about knowledge as we learn how to approach knowledge and thinking, discovering new means and modalities to obtain new results and new possibilities. While the shift to Hindu-Arabic numerals began with Fibonacci in 1202, it took until the 15th century for them to truly take hold. Meanwhile, a rethinking of the knowledge world known as the Renaissance crossed Europe. What began as demonstrations of the ability to use Hindu-Arabic numerals eventually led to the creation of calculus. Interestingly enough, evolution occurs through mistakes; it is the mistakes that lead to progress. The greatest example of this is Fermat’s Last Theorem. Fermat wrote in the margin of a book, “I have the proof of this but don’t have enough room in the margin” (a response that would earn a failing grade on any university campus). That marginal note gave rise to new branches of mathematics and countless journal articles, and the theorem was not finally proved until 1995. The Fermat Effect is about assessing responses by the amount of discourse they generate as opposed to right/wrong.

And now we face generative Artificial Intelligence. While most of the hype is about the Artificial Intelligence component, the more ground-shifting element is the generative component, in that it requires us to think differently. One of the major distinctions within AI is the difference between discriminative AI and generative AI. Discriminative AI focuses on producing a “correct answer.” A clear example is having AI read medical X-rays: multiple X-rays are uploaded into the system and tagged as either yes or no for the condition being screened for. The AI is then given a new example and makes the binary decision of yes or no; in other words, we can assess it as right or wrong, strictly within our comfort zone. It assumes a correct answer, and for too many years, Higher Education has been pursuing the right answers: what is the correct enrollment strategy, what part of the E & G budget should be spent on technology, what are the correct identifiable retention triggers… It gives us hope to know that there is a right answer that we only need to find.
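
For readers who prefer to see the pattern rather than read about it, here is a minimal sketch in Python. The data, features, and labels are invented for illustration; it is a toy classifier, not a medical-imaging system, but it shows the discriminative shape of the problem: a single yes/no answer we can grade as right or wrong.

```python
# Minimal sketch of discriminative AI: a binary classifier that maps
# inputs to a yes/no "correct answer." All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each scan is summarized by two numeric features,
# labeled 0 ("condition absent") or 1 ("condition present").
healthy = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
diseased = rng.normal(loc=2.0, scale=1.0, size=(100, 2))
X = np.vstack([healthy, diseased])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

# A new, unseen example gets a single binary decision we can mark right or wrong.
new_scan = np.array([[1.8, 2.1]])
print(clf.predict(new_scan))        # e.g. [1] -> "yes"
print(clf.predict_proba(new_scan))  # the confidence behind that yes/no
```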

The changes and furor we have been hearing about in recent times relate to generative artificial intelligence. Generative AI functions differently in that it takes the data it was trained on, identifies parameters, and then generates results that fit within those parameters but do NOT already exist. In other words, it seeks to produce that which does not exist. The AI generates new points in the set, points that do not currently exist in the data. If I ask an image-generating AI for the image of a woman and it presents a picture of the Mona Lisa, it has failed. Points that already exist can be found through “Search” (which looks through the known points), and finding them does not require AI.
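
A toy sketch makes the contrast concrete. Real generative AI uses deep networks, not the single bell curve fit to invented numbers assumed below; the only point is that the model learns parameters from data and then produces new points that fit those parameters but appear nowhere in the training set.

```python
# Minimal sketch of the generative idea: learn parameters from data,
# then produce NEW points that fit those parameters but were never observed.
import numpy as np

rng = np.random.default_rng(42)

# "Training data": points we have actually observed (synthetic here).
observed = rng.normal(loc=5.0, scale=2.0, size=1000)

# Learn the parameters of the data.
mu, sigma = observed.mean(), observed.std()

# Generate samples that match the learned distribution but are not copies
# of any observed point; search could only ever return the observed ones.
generated = rng.normal(loc=mu, scale=sigma, size=5)
print(generated)
print(np.isin(np.round(generated, 6), np.round(observed, 6)))  # almost surely all False
```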

Generative AI is not about finding the “right answer.” As a result, critiques that ChatGPT got a given fact wrong are beside the point: the system was not designed to retrieve such answers or facts; it is designed to create responses based on existing material combined in new ways. Generative AI is much more complex than discriminative AI and much more computationally intensive. In short, generative AI is designed to consider new, previously unconsidered solutions. The discomfort for many of us is that we have spent the majority of our educational lives being told that the goal is to find the right answer, as opposed to possible answers.

Discriminative AI follows a model we are more comfortable with, given traditional algorithmic computing, in that outputs are stable and predictable; what we know is verified as true. If you continuously put the same input into a traditional algorithm or a discriminative AI, you will, or should, get the same output. The answers remain stable. Generative AI fed the same input repeatedly will produce different results (hence the “Regenerate” button in ChatGPT). This becomes more disconcerting when we realize that the variation comes from randomness deliberately built into generative AI. The responses we get from a generative AI are probabilistic, not deterministic (again cutting at our “right answer”). When we enter a prompt and regenerate multiple outputs, there is no priority of “correctness,” only responses that emerge from reexamining the information in the input prompt.
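
A small sketch illustrates why. The vocabulary and preference scores below are invented, and real systems work over enormous vocabularies, but the mechanism is the same: at a “temperature” of zero the model always returns its single top-ranked choice, while any positive temperature samples from a probability distribution and so varies from run to run.

```python
# Minimal sketch of deterministic vs. probabilistic output:
# generative models sample from a distribution over next tokens
# rather than returning one fixed answer. Vocabulary and scores are invented.
import numpy as np

rng = np.random.default_rng()

vocab = ["strategy", "budget", "retention", "enrollment"]
logits = np.array([2.0, 1.2, 0.8, 0.3])   # the model's raw preference scores

def sample(temperature: float) -> str:
    if temperature == 0:                   # deterministic: always the top choice
        return vocab[int(np.argmax(logits))]
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return str(rng.choice(vocab, p=probs)) # probabilistic: varies run to run

print([sample(0.0) for _ in range(5)])     # the same word every time
print([sample(1.0) for _ in range(5)])     # a different mix on each run
```

Run it twice and the first line never changes; the second line, by design, will.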

In Higher Education, we have accepted and highlighted generative learning ranging from project-based learning to dissertations.  We have not been as comfortable doing so in our administrative practices.  Even when we say we are going to “think outside the box,” we are still thinking from the perspective of “a box.”  We are no longer in a position to find the right answer, but to generate new possibilities and assess them.  Generative AI does not give us answers; it gives us suggestions.

We live in a world where many inputs produce many outputs, and their interactions produce emergent relationships. The categorical nature of discriminative AI gives us academic disciplines, functional offices, and existing structures. Our world is generative, not discriminative. There are no clean lines between entities: as in complex adaptive systems, it is not the individual agents (vertices) that have the impact but the connections and interactions between them (edges). The elements, both agents and connections, change over time; as William of Ockham noted, the past itself changes as it is continuously being added to. As the amount of data increases and is added to AI training sets, the answers evolve. There is no singular formula for success in any discipline.

Generative thinking, like generative AI, is here to stay and will change how we think and how we operate. Think about your Fibonacci spiral; think for XXXIX seconds and start to generate.
