Edmund Clark is an award-winning leader skilled at leveraging technology and relationships to develop innovative approaches for strategic outcomes. Ed is currently the CIO for the California State University (CSU), the largest university system in the United States, comprising 23 campuses, over 50,000 staff, and more than 470,000 students. Ed’s role is to serve as a strategic partner for the Chancellor’s Executive Team and collaborate with university campus presidents and their chief information officers to catalyze strategic and innovative administrative and academic technology initiatives to advance the mission of the CSU. Ed currently serves on the board of directors for 1EdTech and CENIC (Corporation for Education Network Initiatives in California). He also serves on the technology advisory board for Stratacor/Delta Dental of Minnesota and chairs the Educause Nominations and Leadership committee. Ed holds a doctorate in education from Minnesota State University, Mankato, a master’s in management of technology from the University of Minnesota, and a bachelor’s degree in English from Florida State University.
When technological changes happen swiftly, some groups will move forward while others hesitate, inevitably creating tension.
The same pattern plays out in nature. In fluid dynamics, when a flow speeds up, not everything moves in sync—some parts surge ahead while others lag behind. This mismatch creates shear stress, and the once-smooth motion can spiral into turbulence and chaos. Similarly, in plate tectonics, the Earth’s crust is always shifting, but not all parts move in sync—some plates push forward while others resist, creating mounting stress. When the pressure becomes too much, the pent-up energy is suddenly released in an earthquake.
These examples may serve as metaphors for the current state of adoption of generative artificial intelligence tools, where OpenAI’s ChatGPT reached over 100 million users in just two months—four times faster than TikTok and fifteen times faster than Instagram.
Industries, consumers, and students have raced to embrace generative AI for all sorts of reasons, from productivity and information gathering to entertainment and creative exploration. Yet many of our educational institutions have been far more hesitant to incorporate these tools into teaching and learning. Their concerns are valid, but their hesitancy to engage is not—for we cannot shape an environment without taking part in it.
AI has ignited conversations about academic integrity, plagiarism, intellectual property, and the evolution of critical thinking. These concerns are natural, even necessary—but they are not new. We’ve traveled this road before.
Lessons from the Past
In the Phaedrus, written around 370 BCE, Plato worried that the invention of writing would diminish the power of memory and foster an illusion of understanding rather than genuine comprehension.
Over 1,800 years later, the printing press faced similar criticism, with some arguing it would lead to the spread of misinformation, the “vulgarization” of knowledge, and moral decay.
More than 500 years after Gutenberg’s invention, pocket calculators sparked intense debate in classrooms. Many feared they would erode students’ ability to perform calculations, leaving them dependent on machines and unable to estimate or learn from mistakes. By the mid-1970s, this anxiety reached its peak, with some arguing that calculators would diminish fundamental mathematical skills.
Then came the internet. In the 1990s and early 2000s, Google and Wikipedia were seen as threats to traditional research and academic integrity. Would students blindly trust online sources without verifying credibility? Would students copy and paste their assignments instead of writing them? One essay of the time captured the unease: “It is undeniable that the internet has become the single greatest tool for academic dishonesty ever made available to high school and college students.”
Time and again, history has shown us that technology does not erase learning—it reshapes how we deliver it and engage in it. The internet did not destroy academia; it revolutionized research, collaboration, and knowledge-sharing while making us rethink the way we approached education. AI is simply the next chapter in this ongoing evolution.
If we fail to integrate AI literacy into higher education, we will be failing our students. The ability to work alongside AI will soon be as fundamental as digital literacy is today. Therefore, universities can’t rely on simple certificates or basic skills programs to build this level of understanding. Instead, universities must embed AI training into academic programs, ensuring students develop the critical thinking necessary to evaluate, challenge, and refine AI-generated content. The goal is not to create passive users but informed leaders and innovators.
Furthermore, AI literacy is not just an academic issue; it is a workforce imperative. AI is automating tasks and reshaping entire professions, and today’s employers are looking for graduates who can do more than use AI—they need individuals who understand its ethical implications, limitations, and potential for problem-solving.
Shaping the Future, Not Watching It Pass Us By
Plato once warned that writing would weaken human memory and diminish critical thought. Yet writing became one of the most powerful tools for knowledge preservation and advancement. Each successive technological disruption—from the printing press to the internet—met resistance and caused upheaval before innovators and leaders mitigated its harms and realized its potential.
The entire world is boarding a new technology train, one that may be even more transformative than the internet revolution of the 1990s. Yes, AI, like the internet, carries risks—bias, misinformation, and ethical dilemmas. But we cannot choose to stand by and watch from the platform or just get on the train as passengers. We must actively guide AI’s role in education, ensuring students learn not just how to use it, but when to question it.
As educators, we aren’t just the passengers; we are the builders, the explorers, and the navigators. Most importantly, we should be the guides for our students. Our role is to steer this technology in the right direction by using it, writing about it, and criticizing it, thereby ensuring that AI serves as a tool for empowerment rather than a shortcut to complacency. One thing that we cannot afford to do is watch this train leave the station without us—we must be the ones driving it forward.