Nearly half of U.S. states have no specific AI policy for K-12 classrooms — meaning millions of students are operating under rules their districts invented themselves.
NYC Releases AI School Guidelines — and Parents Are Already Calling Them a Risk to Students
New York City's Department of Education has released official AI usage guidelines for its public schools, and the backlash was immediate. Parents and educators say the rules don't go far enough to protect student data or prevent academic integrity abuses — a pointed critique given that NYC runs the largest school district in the country.
The guidelines permit AI as a brainstorming and revision tool, but critics say the line between "assistance" and "generation" is left dangerously undefined, giving teachers and students little concrete guidance on where legitimate help ends and academic dishonesty begins. The pushback is especially charged given that just weeks ago parents reportedly shut down a school board meeting for seven hours demanding a moratorium on AI in schools — and the city pressed forward anyway.
For parents, the immediate question is practical: if the guidelines allow AI tools in classrooms, which vendors have been vetted, what student data do those tools collect, and under what rules? Federal law — specifically FERPA, which governs student education records, and COPPA, which requires parental consent before collecting data from children under 13 — sets a minimum floor. Whether NYC's new framework clears that floor, critics say, is far from obvious.
Gobble's Take: "Official guidelines" that leave the hard questions unanswered aren't a policy — they're a liability dressed up as one.
Sources: Let's Data Science · New York Post
A Stamford Student Turned In Her Own Work. Two AI Detectors Said Otherwise.
A high school student in Stamford, Connecticut, recently received a message from her teacher: she could not receive a passing grade on an assignment because "more than one platform detected 90% or more AI use." She hadn't used AI.
What followed, as her parent documented in detail, was a confusing process of conversations with the teacher, school administration, district technology leadership, and the central office. What those conversations revealed wasn't a rogue teacher — it was a system with no shared understanding of the rules. Stamford Public Schools' own district-level guidance explicitly states that AI detection tools should not be used as the primary basis for determining academic integrity or assigning grades, and acknowledges that these tools are "emerging and not always reliable." That guidance didn't reach the classroom. The detectors were treated as decisive.
This is the gap that no policy document has closed yet: the distance between what a district writes and what actually happens when a teacher opens a detection tool and sees a number. A student accused under these circumstances must explain their writing process, defend their work, and navigate a dispute system whose rules — as the parent noted — often don't exist in any clearly defined form. That's a significant burden to place on a teenager whose only mistake was writing in a way a flawed algorithm didn't recognize as human.
Gobble's Take: If your school uses AI detectors, ask your principal one question: what happens when the detector is wrong?
Source: Inside Stamford
At Least 25 States Have Acted on AI in Schools. The Other Half Haven't.
As of spring 2026, at least 25 states have enacted or introduced legislation specifically addressing AI use in K-12 education. That number, drawn from a comprehensive state-by-state tracker, sounds like progress — until you flip it over and realize it means roughly half the country has no specific guidance in place at all.
For the states that have acted, the work spans several distinct areas: student data privacy rules that go beyond the federal baseline set by FERPA and COPPA, academic integrity frameworks that define what counts as permitted AI use, AI literacy mandates that require students to be taught how these tools work, and teacher professional development requirements. California's AB 2071, introduced in March 2026, is the most recent high-profile example — a bill the state calls the Digital Wellness Education Act. A growing number of states are also mandating that AI literacy be embedded directly into K-12 curricula, treating it as a foundational skill rather than an elective topic.
For parents and teachers in states that haven't moved yet, the practical reality is that local school districts are writing their own rules, or not writing them at all. An EdWeek Research Center survey found that 79% of U.S. educators say their districts lack clear policies on AI tools like ChatGPT, even as students use them daily. The result is a patchwork where the protections your child has, and the expectations they're held to, depend almost entirely on their zip code.
Gobble's Take: If your state hasn't issued AI guidance, your school district is making policy by improvisation — ask them to show you what they've got in writing.
Source: AI Laws by State
Schools Tried Banning AI. Now They're Teaching Students to Disclose It.
In 2022, most schools had no AI policy. By 2023, most had a prohibition policy. By 2024, most of those prohibitions had been quietly revised. That rapid arc — from silence to ban to something more complicated — is now the defining pattern in how schools are handling generative AI, and it's worth understanding why the bans failed before the next round of policy debates begins.
The core problem with total prohibition, according to a practical guide on AI and academic integrity, is that it's unenforceable and educationally self-defeating at the same time. Detection tools aren't reliable enough to catch AI use consistently, which means prohibition creates an uneven playing field: students who know how to use AI without triggering a detector get an advantage over students who follow the rules honestly. Meanwhile, a blanket ban cuts students off from tools they'll be expected to use after graduation. Complete openness has the opposite flaw — if AI produces the writing, the student hasn't developed the capacity the assignment was designed to build, and that gap will show up later.
The approach that most schools updating their policies since 2023 are converging on is called structured integration: AI use permitted for specific purposes, with required disclosure. Under this framework, a student who uses AI to brainstorm topic ideas is in a different category than a student who uses it to generate a finished draft. Teaching students to articulate that difference, clearly and in writing as part of the assignment, is increasingly treated not just as an academic integrity measure but as a literacy skill in its own right.
Gobble's Take: The schools ahead of this curve aren't asking "did you use AI?" — they're asking "can you explain exactly how, and what you did with it after?"
Sources: OpenEduCat · AI School Librarian
Quick Hits
- 79% of teachers say their district has no clear AI policy: An EdWeek Research Center survey found most U.S. educators are navigating AI tool decisions without formal district guidance — leaving individual teachers to set the rules in their own classrooms. The School House
- AI policies are heavy on compliance, thin on curriculum: A review of school AI policies finds most focus narrowly on safeguarding and data protection while offering teachers almost nothing on how to integrate AI into actual instruction, curriculum design, or assessment. Carl Hendrick
