MIT’s new AI ‘strategy guide’ provides more questions than answers

MIT SMR Connections, an “independent content creation unit within MIT Sloan Management Review,” recently published an AI “strategy guide” commissioned by Anthology, Inc.

Anthology is perhaps not as recognizable as the ubiquitous but reviled learning-management software Blackboard, which it acquired in 2021.

Still, Anthology is one of the major players in the ed-tech sector and is already incorporating AI tools into its products.

Nicolaas Matthijs, Anthology’s chief product officer, claims in the guide that “generative AI has significant potential to improve education.” How will it do that?

To begin with, the guide says that instructors at the University of Michigan who deployed AI “began saving 10 to 12 hours per week on office hours.” Those savings are attributed to AI’s help with grading, creating lecture materials, and generating test questions.

These huge time savings may be attractive to instructors, but won’t they mean that AI will eventually replace, or at least displace, some faculty? The guide is mostly silent on this point and, where it does address it, dismissive of the danger.

But one can reasonably predict that if professors can accommodate more students because of AI, fewer professors will be needed. This will be especially pronounced as the “enrollment cliff” approaches.

For students, the guide promises AI help with personalized learning experiences, such as bespoke study aids. AI will be a 24/7 “electronic study-buddy” that can even speak the student’s native language, an unambiguous benefit if most faculty can’t.

But that is about as specific as the guide gets. More vaguely, it touts immersive simulations, adaptive tutoring systems, and in-class activities, though what these are and how effective they may be remain to be seen.

Curiously, one of the espoused benefits of the AI tutor is that it can “help students quickly understand the framework for a specific course or piece of learning and how it relates to their overall educational journeys or desired career paths. […] Generative AI can suggest ways for them to connect the dots of the overall curriculum.”

Rather than an endorsement of AI, however, this sounds like an indictment of the current state of higher ed. Shouldn’t such reflection and purpose be exclusively the domain of human psychology, which AI totally lacks?

Elsewhere online, Sal Khan of Khan Academy has said that AI can engage in Socratic dialogue with students. This is patently wrongheaded. Since it is absurd to assume that AI can adequately imitate the complexities of the human experience, the claim works only in reverse: it assumes that the human experience is reducible to what AI can do.

Furthermore, this notion goes against the new guide’s stated emphasis on “putting humans at the center of the AI-in-education experience” (a no-brainer) and its assertion that “AI should augment and enhance humanity, not replace it.”

A responsible strategy guide would do a better job of ironing out this apparent inconsistency.

The overall tone of the guide is one of inevitability. Its premise is that resistance to the new technology is futile. The authors invoke the calculator as a historical precedent: since the calculator’s adoption proved inevitable, AI’s adoption will be, too.

This argument doesn’t address desirability, however. If the concern about the calculator was a decline in arithmetical sharpness, and the average student today is worse with numbers than he was before the calculator’s appearance, then the concern was, and is still, warranted.

The guide acknowledges the risks associated with AI, the most salient being plagiarism and threats to academic integrity. A plausible worst-case scenario runs as follows: Professors let AI prepare their syllabi, lecture materials, and tests, while students let AI write their papers and do their homework.

This scenario would confirm, once and for all, the idea that a degree is merely an expensive credential.

Anthology concedes that there is no way for professors to win a “plagiarism arms race” with AI-detection tools. In light of this danger to academic integrity, Anthology says that “crucial in the defense against plagiarism … is the adoption of authentic assessment.”

Authentic assessment generally eschews traditional exams in favor of the practical application of learned material. While authentic assessment may be a laudable approach in its own right, if AI is capable of “immersive simulations,” as the guide claims, then how long until such simulations provide a better solution to the problem AI has created?

Martin Center contributor Peter Jacobsen previously suggested that the way forward is by looking back to old-fashioned evaluation methods, such as oral exams and handwritten essays.

Such methods are unquestionably human-centric.

See more here: jamesgmartin.center




Comments (2)

  • Tom: I have tried this with Perplexity, and quite frankly I am perplexed and quite baffled, like all the know-everything experts.

  • Howdy: From the strategy guide: “they must maintain human oversight of all generative AI activities to ensure that the technology is used responsibly” and “develop clear governance and strong policies emphasizing responsible use of the technology while addressing accuracy, fairness, bias, privacy, and other concerns.” As if… Human oversight doesn’t guarantee anything, and those quotes alone tell you that you’re dealing with a dodgy character just by using it. So where does the benefit come in if it needs constant monitoring?
