This fall, Harvard students returned to a campus that felt decidedly old-school: in-person exams, handwritten assignments, and classrooms where laptops were firmly off-limits. These weren’t simply nostalgic gestures; they were part of Harvard’s evolving strategy for a pivotal challenge sweeping through higher education: the pervasive influence of artificial intelligence (AI) in academic work. With tools like ChatGPT now able to generate essays, summarize complex texts, write code, and even draft research papers in seconds, faculty members are being forced to fundamentally re-evaluate how students learn, complete their coursework, and demonstrate genuine understanding and mastery.

AI Everywhere: A Campus in Transformation
AI has rapidly woven itself into the fabric of Harvard life. According to a 2025 survey of the Faculty of Arts and Sciences conducted by The Crimson, almost 80% of instructors reported encountering student work they suspected was produced by AI – a staggering increase from just two years prior. Despite this widespread use, faculty confidence in identifying AI-generated content remains remarkably low, with only 14% feeling “very confident” in their ability to differentiate between human and AI-authored work. This challenge is further highlighted by research from Pennsylvania State University, which found that humans correctly detect AI-generated text only about 53% of the time, barely better than a coin flip.
The rapid growth of AI has compelled faculty to reconsider not just individual assignments, but the very methodologies of teaching. Harvard instructors are now walking a fine line: integrating AI to enrich the learning experience while rigorously upholding academic integrity and nurturing essential critical thinking skills.
A Flexible, Faculty-Led Approach
Unlike some institutions that have imposed outright bans on AI tools, Harvard has consciously opted against a universal, restrictive policy. While submitting AI-generated work without proper acknowledgment remains a violation of the University’s Honor Code, individual faculty members are given broad autonomy in deciding how AI use is managed within their specific courses.
In 2023, the Faculty of Arts and Sciences introduced three distinct draft AI policies: one highly restrictive, one fully permissive, and a balanced middle-ground option. This empowered instructors to select the approach best suited to their educational objectives. By fall 2025, nearly all of the twenty most popular undergraduate courses had established clear AI policies, a stark contrast to the complete absence of such guidelines in 2022, as reported by The Crimson.
Dean of Undergraduate Education Amanda Claybaugh articulated the guiding principle behind this strategy: “AI is an incredibly powerful tool, but its true value lies in the hands of someone who possesses the knowledge to critically evaluate its output – and that requires the ability to perform the work independently first. We must ensure our students acquire this fundamental understanding.”
Diverging Paths: Restricting vs. Embracing AI
Faculty responses to AI integration vary significantly across Harvard’s diverse academic disciplines.
AI-Proofing Assignments
Some professors have chosen to minimize AI’s influence on their assessments altogether. History professor Jesse Hoffnung-Garskof, for instance, replaced traditional final research papers with oral examinations, directly addressing the ease with which large language models can produce written work. Similarly, Physics professor Matthew Schwartz transitioned from take-home finals to in-person, timed exams, emphasizing memorization and direct problem-solving skills.
In the humanities, a prevalent concern among some faculty is that an over-reliance on AI could diminish the intellectual rigor and profound human element inherent to their fields. English professor Deidre Lynch notably cautioned, “To give AI a central role in education, particularly in the humanities, feels like a fundamental denial of what makes us human.”
Harnessing AI
Conversely, many instructors actively encourage students to leverage AI as an advanced learning partner. Computer Science 50, Harvard’s famously popular introductory course, provides a custom chatbot designed to assist with coding queries. Economics 1010a introduced a specialized AI assistant for the course, while students in East Asian studies use AI to translate ancient texts, then engage in classroom discussions to deepen their contextual understanding.
Statistics lecturer James Xenakis observed that AI significantly accelerates research by rapidly processing complex datasets, but he emphasized the crucial necessity for students to still grasp the core underlying concepts themselves. Peter K. Bol, a distinguished professor of East Asian Languages and Civilizations, incorporates weekly AI exercises that blend translation with thoughtful follow-up questions. “Everyone is going off and doing something slightly different, and they get exposed to each other’s ideas,” Bol remarked, highlighting AI’s potential to cultivate dynamic, collaborative learning environments.
Preparing Students for an AI-Driven World
Harvard’s leadership firmly believes that mastering responsible AI usage is a vital skill for the future. Dean David J. Deming, addressing freshmen during Convocation, reminded them that young, educated individuals are already among the most prolific users of AI. “You possess the creativity and open-mindedness to discover the most effective ways to utilize it,” he stated, underscoring the imperative of thoughtful AI application.
The Bok Center for Teaching and Learning has been instrumental in supporting faculty, developing specialized AI chatbots for courses, crafting innovative AI-resilient assignments, and conducting workshops focused on integrating AI into pedagogical practices. Faculty requests are increasingly leaning towards specific, tailored AI tools – such as AI for debugging code or transcribing oral exams – rather than generic, catch-all assistants.
Balancing Ethics, Learning, and Time Pressures
While the emergence of AI naturally brings concerns about academic dishonesty, many faculty members suggest that its adoption by students often stems more from heavy workload pressures than from a lack of dedication. Professor Hoffnung-Garskof pointed out that most Harvard students “do not rely on AI to produce work superior to what they could achieve themselves – they are too deeply committed to their own high standards.”
AI has also spurred a crucial re-evaluation of teaching objectives. In-person examinations, oral assessments, and AI-resilient assignments are strategically designed to evaluate not just factual knowledge, but also critical thinking, creativity, and sophisticated problem-solving – the very competencies students will require to thrive in an AI-dominated world.
Harvard’s AI-Resilient Future
Three years after the debut of ChatGPT, Harvard’s response to AI balances judicious caution with forward-thinking opportunity. By integrating AI into the learning ecosystem while implementing carefully structured, AI-resilient assessments, the university aims to prepare students to think critically, work creatively, and use technology effectively. The ultimate objective is clear: in the era of artificial intelligence, true mastery encompasses a deep understanding of both the subject matter and the transformative tools that will shape it.
This article is based on reporting by The Harvard Crimson.