You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
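Just to illustrate the trick (this is a toy sketch, not anything from actual Loebner entries): the bot emits a keystroke stream that occasionally presses a wrong key and then backspaces over it, so a viewer watching character by character sees human-looking typos.

```python
import random

def humanized_keystrokes(text, typo_rate=0.1, rng=None):
    """Return the keystroke sequence, with '\\b' standing in for backspace."""
    rng = rng or random.Random(0)   # fixed seed keeps the demo reproducible
    strokes = []
    for ch in text:
        if ch.isalpha() and rng.random() < typo_rate:
            strokes.append("x")     # stand-in for a near-miss key
            strokes.append("\b")    # ...immediately "noticed" and corrected
        strokes.append(ch)
    return strokes

# With a high typo rate the fakery is easy to see in the stream.
strokes = humanized_keystrokes("hello", typo_rate=0.5)
```

Rendering the stream (applying each backspace) always reproduces the original text; only the apparent typing behavior changes.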
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Participants spent more time polishing up the natural language parsing aspects in conjunction with pre‑programming elaborate backstories for their chatbot's bios among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But reading through some of the previous competition chatbot transcripts still makes for fascinating reading.
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
Take home exams were very common when I was in school, which was before you could get answers on the internet. After internet answer and cheating sites came along, a professor would have to either not care and let cheating run rampant, or struggle to constantly make unique new kinds of take home questions somehow. AI has basically killed that option too.
I loved take home exams because they allowed me to study beforehand but not have the insane pressure and condensed studying required for exams in the classroom. Even though they were normally much harder and longer, I liked them. I felt I learned much more through them because I could take the time to understand concepts I had missed without feeling the time pressure of in-person exams.
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
Exams in the classroom, with all the time pressure, are also an important part of education. Maybe they should be a low percentage of the grade to prevent too much stress, but they're an important learning experience.
I'd like to see some data on this. My general-ed recall is minimal, and in programming before school, I certainly learned a ton more by coding than by testing. That's my perception of my time in school, as well.
I disagree. Take home exams represent how work and progress occur in the "real" world. There's nothing in the post-education world that resembles in-person exams.
Maybe the medical profession is a counter example.
I used to make my classes 60-80% project work and 20-40% quizzes, all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes allowed.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
I always preferred the "you get some grades along the way to gauge your progress but the lion's share of the weight went to the proctored exams" method unless the lion's share of the normal work was also proctored anyways (at which point it doesn't really matter how it's done).
The reason was less for myself and more because anything group related suddenly shot up in quality when the other individual work classmates were graded on couldn't be fudged.
The things I don’t like about putting too much weight in the exams are:
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
I think it's all about speed. In "real life" everything can be looked up, but an exam optimizes for not even having to look it up. Then any research becomes much faster.
Whether it's good or bad, I don't know; I think US higher education focuses too much on the ability to produce huge amounts of mediocre work, but that's the idea behind exams.
One of the reasons I've always encouraged software people to learn to touch type has nothing to do with typing speed: it's about reducing or eliminating the cognitive load of typing. You want to be thinking in expressions (sentences), not letters. (The increase in effectiveness comes from not getting distracted by the mechanics of typing...)
Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge: job interviews, presentations, even mundane work tasks all require you to know the basics quickly.

"The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'."

Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds: how fast do you need to be able to recall, how deep, which details are fundamental. From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.
High-stakes artificial exams can help prepare you for the artificial stakes of job interviews, where you need to crank out a working solution in 30 minutes with jet lag and someone looking over your shoulder.
That's true. They do better prepare an applicant for a job that filters on a person's ability to accomplish arbitrary things in a vacuum, completely disconnected from the real world.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
Obviously they're both supposed to be proxy measures, not realistic scenarios. I was mostly joking before but I do think exams provide a pretty good proxy for ability in the subject if the teacher is decent. Interviews not so much unless the applicant is similarly prepared with foreknowledge of what they will be tested on and had some time to prepare and given recent practice.
This is where offering an alternative version of the course, with the graded activities still monitored, comes in. The downside is that it tends to force in-person, synchronous sessions rather than custom scheduling of regular tests.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
In real life you need to know the options and their trade-offs to solve a given problem. You don't need to know all the techniques perfectly, but you do need to be able to characterize them and compare them, from rote memory.
I agree, I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language but you have to look up every single noun. I'm not saying people can't do it, but it's tedious, slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.
So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least a 20% on your quizzes, and a C (always passing) if they get at least a 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it was, say, multiple choice questions with four possible answers, then you can expect them to earn at least 25% just by chance.
While that answers their direct question, they do bring up a good point -- how often are you handing out less-than-25% scores on exams? I'd imagine any professor who did that would get some severe criticism, the kind that would make even a cheater pretty livid.
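The arithmetic in the exchange above is easy to verify (a quick sketch; the 60/70 letter-grade cutoffs are assumed from a standard grading scale):

```python
def final_grade(homework_pct, quiz_pct, homework_weight=0.5):
    """Weighted average of the two graded components."""
    return homework_weight * homework_pct + (1 - homework_weight) * quiz_pct

# With AI doing 100% of the homework half at a 50/50 weighting:
d_overall = final_grade(100, 20)   # 60.0 overall, the usual D cutoff
c_overall = final_grade(100, 40)   # 70.0 overall, the usual C cutoff
```

So a 20% quiz score lands exactly on a D and a 40% quiz score on a C, which is the point: the quiz half has to discriminate well below those marks to catch someone who learned nothing.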
It does! That’s why you can ask to be evaluated by a commission of professors.
If you don't pass after 3 tries, a commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
> It does! That’s why you can ask to be evaluated by a commission of professors.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Depends on your measuring stick. Cheating themselves out of an education? Yep. Cheating themselves into a credential -> job - the status / remuneration of which is almost entirely divorced from the quality of the education, being aligned rather with the name of the organization on the diploma.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
The thing is, when colleges don't test students' ability properly before issuing a credential, employers start testing job applicants' ability after they've received it.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
This is untrue. Students who graduate without actually absorbing knowledge as laid out in the curriculum devalue the degree when they show up in the workforce lacking that knowledge. This is part of why new grads are undesirable job candidates, there’s a chance you are paying a higher wage for someone who may not have learned anything.
When I attended university (almost a decade ago, I guess; time flies) we didn't have a single exam on the computer. All exams were on paper or oral, and most were without notes too. Computer science does not require computers.
This is usually true, but it is also true that some classes are graded "on a curve" and so grade inflation could hurt people who are honestly doing work. Also, cheaters tend to suck all the air out of a room. For example, my I.T. instructor designed a really nice oral quiz slide-show for the entire classroom. I found it a few hours before the class, I watched it in its entirety, and then when he tried to run it live, I spoilered all the answers before any other student could answer. I wasn't strictly cheating, but I wasn't being fair to my classmates' learning process, either.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small, too, with an integrated monitor and keyboard, so it didn't take over the whole desk, where I still used pencil and paper often.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
There's an entire industry of "distraction free writing devices" based mostly on that nostalgia/yearning (not to say that it isn't effective, but the effectiveness is not actually being measured :-)
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
Gyms are a great example actually because tractors exist to do the economically useful work. You now optionally go to the gym to benefit from fake labor that used to be the side effect of useful work. The fake labor is now what colleges are trying to sell, and it's going to kill them.
3,000 years ago, physical labor was a component of most jobs. Today gyms are for people who can afford to attend them and don't have a day job that naturally exercises them through labor. People exercising purely for health benefits, and not because the strength benefits them in their job and in other facets of their life, is new.
Huh? The gym analogy doesn’t even make sense. People didn’t go to gyms when they were farming with oxen. Gyms are popular now precisely because tractors exist and you don’t need manual labor to farm anymore but people still need the physical exercise for their health. Society has adapted to the arrival of new life-changing technology. Our education system needs to adapt to new technology like AI too. You can probably uplevel a lot of courses and cover a lot more interesting topics than before and teach real application of things you learned aided by AI. Just like when I was doing a CS major 20 years ago, they didn’t spend too much time teaching me assembly programming beyond 1 or 2 lectures (they let me use a compiler for programming assignments!).
Maybe instead of trying to teach around the abacus, we need to teach the higher level things you can reach with MATLAB.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public code-repo portfolio much less meaningful as a sign of legitimacy.
I'm confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass a terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort is particularly dumb? Is she incentivized to make it easy to pass her classes, to hand out that A you paid so much for? Or to make it hard, so that A is worth something?
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
My impression was she just brings the typewriters into class as a one-day novelty thing per course, not that it becomes the norm for the whole semester. The goal is to give the students a taste of what the old-fashioned way is like, to get them thinking about it.
This will only work until somebody figures out how to connect an AI to a typewriter with some sort of mic. The person will dictate into it with AI-assisted revisions, and once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an Agentic AI for work, learning, research, then when test time comes, the student should be required to stand in the front of the class and teach the class what they have learned, i.e. "Teach Back" all they learned to the entire class student body and teacher. The entire class, instructor included, will also be required to participate in a Q&A session to make sure that student's learning is not just made up of memorization, e.g. restate the information learned but using different words, different scenarios, etc.
That makes sense. The CX-2 calculators are a bit less like the iPad era and more like the equivalent of calc I/II classes which only let you use specific TI models versus an app on your smartphone.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.
Education is a nice side effect sometimes, but yeah, I don't know how you could reach any other conclusion. If you're motivated to learn for learning's sake, college is an annoying slog that you know you don't need post-millennium. I literally left college early and started making money instead of spending it, because I got tired of demonstrating to my professors that I already knew everything they were teaching and that it'd be a waste of time for me to come to class.
Or maybe you chose to waste your time because you treated college as a way to get a piece of paper instead of as the only time in your life when you are surrounded by experts who will spend an hour a week answering any questions you can think of.
No time wasted at all: that option is also trivially available outside of college; it's called "email". There's a whole industry in tricking new adults into believing that college is not about getting a piece of paper; it's gross, and it's avoidable. I paid off a year of unnecessary college debt in a quarter of a year of doing real work I learned how to do in my free time. It's a trap, and articles like this, where colleges are working as hard as they can to make education less useful, prove it.
I like this. Related: this semester I've been using handwritten quizzes in class. It's a simple change that's been one of the best I've made, because it changed students' expectations of class prep. Before, you could kind of do the readings, sort of prep, and coast in class. But if you need to write out quiz answers, you're forced to know the material better, as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.
We already had AI proof education.
I personally dislike placing a heavy emphasis on exams. Assignments/projects have been consistently the most enjoyable and rewarding parts of the courses I've taken so far in university.
It's a shame that they are also way more susceptible to cheating with AI.
I went to college as a MechE, so I'm unsure if compsci was different. But overall, all the "fun" projects were labs. We had three semesters of hell, each with 2-3 labs, and we wrote 20 pages or so for EACH lab every week (usually in a team of 2-3).
Also way more susceptible to cheating in traditional non-AI ways. And your mark ends up depending a lot on how much time you have to invest independent of how good you are at the course material.
Assignments and projects are great for learning, but suck for evaluation.
I really appreciated classes where there were rapidly diminishing returns to time spent :)
Another example: lit classes where the grade is based on time-limited, open-book exams, handwritten in "blue books".
Read the book, pay attention in class, spend 90 min writing an essay, and you are done.
Is evaluation that important? Ultimately, if you can't do the work, you're only cheating yourself in the long run...
Part of the purpose for evaluation is to provide feedback. I'm not going to claim that the form of feedback is great, but it does offer motivation to improve.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
That is the traditional view, the view of those who want to improve their own knowledge and abilities, and presumably the view of those who would like to consider the degree to be a meaningful credential.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
Yes. I care that the work I've done and what I've learned is actually good and correct. Vibes-based learning/anything is valueless.
Then I suppose we can go back to having computer labs that can only access whitelisted domains and other study materials. Students code there to ensure no cheating.
The labs I was in weren't connected to the Internet at all, only a local intranet. Though they were all running pre-Oracle Solaris, if memory serves, so I'm probably dating myself a bit.
Today, just having teachers walk around during an exam instead of browsing on their phones would do wonders…
Writing programs by hand is something I had to do too. Complete waste of time.
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
No projects, no labs, no teamwork, no papers?
What a narrow set of skills to send into your economy.
Given the way things are going, not knowing how to use AI will be like coming out of school not knowing about revision control.
Isn't the selling point of AI that it does it for you? What's to learn?
I think you’re missing the /s.
Did you never have to write a research paper?
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
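As a toy illustration of that "event recording" idea (nothing like Google's actual internal format -- the event shape here is entirely made up): a document stored as a log of edit events can be replayed, and a single event that inserts a huge block of text stands out as a probable paste.

```python
from dataclasses import dataclass

@dataclass
class Edit:
    t: float    # seconds since the document was created
    pos: int    # character offset of the insertion
    text: str   # inserted text (deletions omitted for brevity)

def replay(events, paste_threshold=200):
    """Rebuild the document from its edit log, flagging paste-sized events."""
    doc, flags = "", []
    for e in events:
        doc = doc[:e.pos] + e.text + doc[e.pos:]
        if len(e.text) > paste_threshold:
            flags.append((e.t, len(e.text)))  # looks pasted, not typed
    return doc, flags

doc, flags = replay([Edit(0.0, 0, "Hello "), Edit(1.4, 6, "world")])
# doc == "Hello world", flags == [] -- typed character-by-character, nothing pasted
```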
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Typing as a service is a whole cottage industry on Etsy.
That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)
Another way to automate this particular task is that some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student who is skilled in the art of using the bot to have one of these typewriters be the output target.
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
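For the curious, a sketch of what that might look like -- the pyserial usage and device path are assumptions, and real typewriter interfaces vary in baud rate, handshaking, and line endings. Only the pacing logic is shown concretely:

```python
import time

def type_out(write, text, cps=10, sleep=time.sleep):
    """Feed text to a typewriter one character at a time, paced so the
    mechanical carriage can keep up. `write` is any byte-accepting sink."""
    for ch in text:
        write(ch.encode("ascii", errors="replace"))
        sleep(1 / cps)

# With pyserial (an assumption -- pip install pyserial) and a hypothetical port:
#   import serial
#   with serial.Serial("/dev/ttyUSB0", baudrate=1200) as port:
#       type_out(port.write, "Dear Professor,\r\n\r\n...")
```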
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
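For anyone curious how that kind of check works: a .docx is just a ZIP archive, and docProps/core.xml records the original author and last editor. A minimal sketch of pulling those fields out (tracked changes themselves live in word/document.xml when they haven't been accepted and stripped):

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces defined by the Office Open XML core-properties part.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def docx_authors(path):
    """Return (original creator, last editor) from a .docx's core metadata."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    creator = root.findtext("dc:creator", default="", namespaces=NS)
    modifier = root.findtext("cp:lastModifiedBy", default="", namespaces=NS)
    return creator, modifier
```

A mismatch between creator and last editor isn't proof of anything on its own, but it's exactly the kind of breadcrumb that sinks a recycled paper.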
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
In general I love the idea of turning printers into typewriters. I've been thinking about how to do it with an inkjet printer.
arms race....
oh look, there's an LLM trained on keyloggers to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
https://en.wikipedia.org/wiki/Loebner_Prize
Wow it feels like the Loebner prize went away right at the dawn of the LLM. Is it correlated?
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Participants spent more time polishing up the natural language parsing aspects in conjunction with pre‑programming elaborate backstories for their chatbot's bios among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But reading through some of the previous competition chatbot transcripts still makes for fascinating reading.
Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!
I think it would be great to be revived with a different premise.
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
Take home exams were very common when I was in school, which was before you could get answers on the internet. After internet answer sites and cheating sites came along, a professor had to either not care and let cheating run rampant, or struggle to constantly invent unique new kinds of take-home questions. AI has basically killed that option too.
Did you by any chance graduate before the COVID-19 pandemic?
I loved take home exams because they allowed me to study beforehand without the insane pressure and condensed studying required for exams in the classroom. Even though they were normally much harder and longer, I liked them. I felt I learned much more through them because I could take the time to understand concepts I had missed without feeling the time pressure of in-person exams.
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
Exams in the classroom, with all the time pressure, are also an important part of education. Maybe they should be a low percentage of the grade to prevent too much stress, but it's an important learning experience.
I'd like to see some data on this. My general-ed recall is minimal, and in programming before school, I certainly learned a ton more by coding than by testing. That's my perception of my time in school, as well.
I disagree. Take home exams represent how work and progress occurs in the "real" world. There's nothing in the post education world that resembles in-person exams.
Maybe the medical profession is a counter example.
I used to make my classes 60-80% project work, 40-80% quizzes all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with a page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
I always preferred the "you get some grades along the way to gauge your progress but the lion's share of the weight went to the proctored exams" method unless the lion's share of the normal work was also proctored anyways (at which point it doesn't really matter how it's done).
The reason was less for myself and more because anything group related suddenly shot up in quality when the other individual work classmates were graded on couldn't be fudged.
The things I don’t like about putting too much weight in the exams are:
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
I think it's all about speed. In "real life" everything can be looked up, but exam optimizes to not even having to look it up. Then any research becomes much faster.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
One of the reasons I've always encouraged software people to learn to touch type has nothing to do with typing speed - it's about reducing/eliminating the cognitive load of typing, you want to be thinking in expressions (sentences) not letters. (The increase in effectiveness comes from not getting distracted by the mechanics of typing...)
Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge: job interviews, presentations, even mundane work tasks all require you to know the basics quickly.

"The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'."

Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds: how fast do you need to be able to recall, how deep, which details are fundamental.

From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.
High stakes artificial exams can help prepare you for artificial stakes at job interviews where you need to crank out a working solution in 30 mins with jet lag and someone looking over your shoulder
That's true. They do better-prepare an applicant for a job that filters on a person's ability to accomplish arbitrary things in a vacuum that is completely disconnected from the real world.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
Obviously they're both supposed to be proxy measures, not realistic scenarios. I was mostly joking before but I do think exams provide a pretty good proxy for ability in the subject if the teacher is decent. Interviews not so much unless the applicant is similarly prepared with foreknowledge of what they will be tested on and had some time to prepare and given recent practice.
This is where the alternative of a course with other (still monitored) graded activities comes in. The downside is that it tends to force in-person, synchronous sessions rather than custom scheduling of regular tests.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
In real life you need to know the options and their trade-offs to solve a given problem. You don't need to know all the techniques perfectly, but you do need to be able to characterize them and compare them, from rote memory.
I agree, I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language but you have to look up every single noun. I'm not saying people can't do it, but it's tedious, slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.
So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least a 20% on your quizzes, and a C (always passing) if they get at least a 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it was, say, multiple choice questions with four possible answers, then you can expect them to earn at least 25% just by chance.
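The arithmetic, for anyone who wants to check the thresholds (assuming the usual 60/70 cutoffs for a D/C):

```python
# With a 50/50 split, an AI-perfect homework score floors the course
# grade at 50%, so the quiz score alone decides the letter grade.
def course_grade(homework, quiz, hw_weight=0.5):
    return hw_weight * homework + (1 - hw_weight) * quiz

assert course_grade(100, 20) == 60.0   # D with only 20% on quizzes
assert course_grade(100, 40) == 70.0   # C with only 40% on quizzes
```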
My quizzes are written responses, pseudocode and annotating code.
While that answers their direct question, they do bring up a good point -- how often are you handing out sub-25% scores on exams? I'd imagine any professor who did would get some severe criticism, which would make even a cheater pretty livid.
The last point is very interesting and might keep universities relevant.
When I was in college, your grade fully depended on the oral exam/debate with the professor. Everything else was but the entry ticket.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types
This sounds extremely susceptible to unconscious bias, or even just straightforward discrimination.
It does! That’s why you can ask to be evaluated by a commission of professors.
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
> It does! That’s why you can ask to be evaluated by a commission of professors.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
Moreso than a job interview?
More systematic than a job interview.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
A hand-written essay in class would seem to be a workable mechanism for a student to demonstrate an ability to reason on their own about a subject.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
I think if your university doesn't do in person exams with pen and paper then the degrees it hands out are not much evidence of anything.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the course I did at university. The exams were a helpful motivating factor even for the interesting courses.
Better dust off that old AlphaSmart!
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Depends on your measuring stick. Cheating themselves out of an education? Yep. Cheating themselves into a credential -> job - the status / remuneration of which is almost entirely divorced from the quality of the education, being aligned rather with the name of the organization on the diploma.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
The fact that it's an industry is alone enough to cry.
Well from a certain perspective they are also hurting the schools reputation, the programs reputation, and ultimately their fellow students.
> If students cheat they hurt only themselves
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
The thing is, when colleges don't test students' ability properly before issuing a credential, employers start testing job applicants' ability after they've received it.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
This is untrue. Students who graduate without actually absorbing knowledge as laid out in the curriculum devalue the degree when they show up in the workforce lacking that knowledge. This is part of why new grads are undesirable job candidates, there’s a chance you are paying a higher wage for someone who may not have learned anything.
They hurt other students who worked hard for the degree. They hurt the reputation of the school and the utility of the degree as a credential.
When I attended university (almost a decade ago, I guess; time flies) we didn't have a single exam on the computer. All exams were on paper or oral, and most were without notes too. Computer science does not require computers.
This is usually true, but it is also true that some classes are graded "on a curve", so grade inflation could hurt people who are honestly doing the work. Also, cheaters tend to suck all the air out of a room. For example, my I.T. instructor designed a really nice oral quiz slide-show for the entire classroom. I found it a few hours before class and watched it in its entirety, and then, when he tried to run it live, I spoilered all the answers before any other student could respond. I wasn't strictly cheating, but I wasn't being fair to my classmates' learning process, either.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small too, with an integrated monitor and keyboard, so it didn't take over the whole desk where I still used pencil and paper often
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
There's an entire industry of "distraction free writing devices" based mostly on that nostalgia/yearning (not to say that it isn't effective, but the effectiveness is not actually being measured :-)
I have an old MacBook Air I flashed with writerdeckOS [0]. Feels like a digital typewriter.
[0]: https://writerdeckos.com/
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
The goal of the educational process isn't the test paper, it's the learning.
Gyms aren't redundant because tractors exist.
Gyms are a great example actually because tractors exist to do the economically useful work. You now optionally go to the gym to benefit from fake labor that used to be the side effect of useful work. The fake labor is now what colleges are trying to sell, and it's going to kill them.
Gyms predate tractors.
3,000 years ago, physical labor was a component of most jobs. Today gyms are for people who can afford to attend them and don't have a day job that naturally exercises them through labor. People exercising purely for health benefits, and not because the strength benefits them in their job and in other facets of their life, is new.
Huh? The gym analogy doesn’t even make sense. People didn’t go to gyms when they were farming with oxen. Gyms are popular now precisely because tractors exist and you don’t need manual labor to farm anymore but people still need the physical exercise for their health. Society has adapted to the arrival of new life-changing technology. Our education system needs to adapt to new technology like AI too. You can probably uplevel a lot of courses and cover a lot more interesting topics than before and teach real application of things you learned aided by AI. Just like when I was doing a CS major 20 years ago, they didn’t spend too much time teaching me assembly programming beyond 1 or 2 lectures (they let me use a compiler for programming assignments!).
Gyms predate tractors by a couple of thousand years. You should think harder about the analogy.
There are plenty of things AI can do that students still benefit from learning.
This is like saying you shouldn't learn to add because we have calculators.
Maybe instead of trying to teach around the abacus, we need to teach the higher level things you can reach with MATLAB.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
One consequence of LLM fraud at scale making remote/online tests & document submission worthless is it might act as a giant revitalizing boost for the brick-and-mortar school systems. Suddenly having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public repo code portfolio much less meaningful as a sign of legitimacy.
I’m confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort are particularly dumb? Is she incentivized to make it easy to pass her classes to get that A you paid so much for? Or hard or make that A worth something?
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
My impression was she just brings the typewriters into class as a one-day novelty thing per course, not that it becomes the norm for the whole semester. The goal is to give the students a taste of what the old-fashioned way is like, to get them thinking about it.
This will only work until somebody figures out how to connect an AI to the typewriter via some sort of microphone, so the person can dictate into it with AI-assisted revisions. Once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If a student uses an Agentic AI for work, learning, research, then when test time comes, the student should be required to stand in the front of the class and teach the class what they have learned, i.e. "Teach Back" all they learned to the entire class student body and teacher. The entire class, instructor included, will also be required to participate in a Q&A session to make sure that student's learning is not just made up of memorization, e.g. restate the information learned but using different words, different scenarios, etc.
... meanwhile, all these students graduate, can't find jobs and become plumbers or bricklayers.
Just have them write it out. “Ain’t nobody got a goddamn typewriter”.
Pfft, just grab a teletype and run lpr -P ttyUSB0 ai_generated_report.txt ;-)
Next up: allow slide rules on exams.
Were they ever banned?
Probably around the time they were invented. They were mandatory on my ground exam (private pilot).
OOC was this a while ago? Even when I took the ground exam around 10 years ago, everyone had electronic flight computer calculators (CX-2s).
It was awhile ago (init var me == old;) - back in the era of "iPads can't be used for critical flight information, they're too unreliable".
That makes sense. The CX-2 calculators are a bit less like the iPad era and more like the equivalent of calc I/II classes which only let you use specific TI models versus an app on your smartphone.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
The college instructor might as well ban calculators and use abacuses then.
We couldn’t use graphing calculators on calculus exams. There were professors who banned calculators entirely.
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.
> College is about learning how to work in an industry.
Oh
Education is a nice side effect sometimes, but yeah, I don't know how you could reach any other conclusion. If you're motivated to learn for learning's sake, college is an annoying slog that you know you don't need post-millennium. I literally left college early and started making money instead of spending it, because I got tired of demonstrating to my professors that I already knew everything they were teaching and that it'd be a waste of time for me to come to class.
Or maybe you chose to waste your time because you treated college as a way to get a piece of paper instead of as the only time in your life when you are surrounded by experts who will spend an hour a week answering any questions you can think of.
No time wasted at all, that option is also trivially available outside of college, it's called "email". There's a whole industry in tricking new adults into believing that college is not about getting a piece of paper, it's gross, and it's avoidable. I paid off a year of unnecessary college debt in 1/4 of a year of doing real work I learned how to do in my free time. It's a trap and articles like this where colleges are working as hard they can to make education less useful prove it.