The lawsuit, filed in U.S. District Court in Portland, Oregon, by the Federation of State Humanities Councils and the Oregon Council for the Humanities, names DOGE, its acting administrator, Amy Gleason, and the NEH among the defendants.
The plaintiffs ask the court to “stop this imminent threat to our nation’s historic and critical support of the humanities by restoring funding appropriated by Congress.” The complaint describes the “disruption and attempted destruction, spearheaded by DOGE,” of a state-federal partnership to support the humanities.
The lawsuit, filed Thursday, maintains that DOGE and the National Endowment for the Humanities exceeded their authority in terminating funding mandated by Congress.
Today, US District Judge Colleen McMahon ruled that the cancellation of $100 million in grants was unconstitutional. You can read the entire decision in PDF form here.
The reason I’m linking to this is the ruling’s treatment of ChatGPT, which interests me because case law around the use of these LLMs is still developing, and we’re finally starting to see some precedent form.
Some choice quotes outlined by The Verge:
Fox testified that he used ChatGPT “[t]o highlight why [a] grant may relate to DEI” and “to pull out anything related to DEI.” To do so, he submitted each cursory grant description from the NEH spreadsheet to ChatGPT using a standardized prompt: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.” Fox testified that he did not define “DEI” for ChatGPT and that he did not have the slightest idea how ChatGPT understood the term.
+
After being deployed from DOGE to NEH, Justin Fox used search terms, which he labeled as “Detection Codes,” to identify grants that he dubbed the “Craziest Grants” and “Other Bad Grants.” The search terms included, among other terms, “BIPOC (Black, Indigenous, People of Color),” “Minorities,” “Native,” “Tribal,” “Indigenous,” “Immigrant,” “LGBTQ,” “Homosexual,” and “Gay.” When Fox was asked whether he “r[a]n this list of words through every grant description” he received from NEH, he confirmed, “yes.” In this way, Fox constructed and applied explicit classifications based on protected characteristics and used them as the operative criteria for revoking federal grants.
+
There is no distinction to be drawn here between the Government and ChatGPT. ChatGPT was the Government’s chosen instrument for purposes of this project, and DOGE’s use of AI to identify DEI-related material neither excuses presumptively unconstitutional conduct nor gives the Government carte blanche to engage in it. …There is not a scintilla of evidence that Fox or Cavanaugh, having obtained a “DEI” rationale from ChatGPT, undertook any meaningful review of whether that rationale made sense.
I’ve been teaching AI classes at some Fortune 50 companies, focused on business writing. It’s been a blast to teach, and I’m enjoying the feedback loop from learners: seeing how they navigate the complexity of using LLMs while, frankly, still being human and using their own minds. Fascinating stuff. One thing I mentioned just last week in class is how important it is to validate that the LLM understands what you mean when you give it instructions. When I say “make it concise,” the model has some idea of what that means, but does it align with my understanding? How do I know? Well, I can ask.
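To make that concrete, here’s a minimal sketch of what “well, I can ask” might look like against the OpenAI Python SDK. The model name and the `probe_definition` helper are my own illustration, not anything from the case:

```python
# A minimal sketch of the "ask it first" step, using the OpenAI Python SDK.
# The model name and the term being probed are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def probe_definition(term: str) -> str:
    """Ask the model to state its working definition of a term
    before you rely on that term in any instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f'Before we begin: how would you define "{term}"? '
                       "List the criteria you would use to decide whether "
                       "a piece of text satisfies it.",
        }],
    )
    return response.choices[0].message.content

# Compare the model's stated criteria against your own before proceeding.
print(probe_definition("concise"))
```

The point of the exchange isn’t the answer itself; it’s surfacing the gap between the model’s working definition and yours before that gap can do any damage.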
So when the defendant claims that he did not define DEI, that’s a big problem. You’re relying on the model entirely, and while LLMs can be helpful, you have to be specific and prescriptive if you want the task done the way you intend. A good first step is asking it, “How would you define DEI?” Then, after providing feedback on that definition, have it classify a dozen or so examples you’ve already checked yourself and spot-check its reasoning to ensure there are zero false positives. If there are any false positives, you go back to the prompt engineering phase; you do not proceed with the business activity. A sketch of that spot-check follows below.
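Here’s a hedged sketch of that spot-check step, reusing the exact prompt quoted in the testimony above. The sample descriptions, labels, and `classify` helper are illustrative assumptions of mine, not anything from the record:

```python
# A sketch of the spot-check step: run the classifier prompt over a small,
# hand-labeled sample and refuse to proceed on any false positive.
from openai import OpenAI

client = OpenAI()

# The prompt text is the one quoted in the court record.
PROMPT = ('Does the following relate at all to DEI? Respond factually in '
          'less than 120 characters. Begin with "Yes." or "No." followed '
          'by a brief explanation.\n\n{description}')

def classify(description: str) -> bool:
    """Return True if the model answers 'Yes.' for this description."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": PROMPT.format(description=description)}],
    )
    return response.choices[0].message.content.strip().startswith("Yes")

# A dozen or so descriptions you have labeled yourself
# (True = actually DEI-related); these two are made-up examples.
labeled_sample = [
    ("Oral histories of Oregon logging communities", False),
    ("Digitizing a county courthouse archive", False),
    # ... roughly a dozen hand-checked entries in practice
]

false_positives = [desc for desc, truth in labeled_sample
                   if classify(desc) and not truth]

if false_positives:
    # Back to prompt engineering -- do NOT run the full batch of grants.
    raise SystemExit(f"False positives on sample: {false_positives}")
```

A dozen samples won’t prove the classifier is sound, but even this minimal gate would have forced a human to look at the rationale before a single grant was touched.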
So when someone responsible for cancelling $100,000,000 worth of federal grants fails to do that, they seriously screwed up.
Further, that last quoted passage is the one I find most interesting to see a federal judge rule on: ChatGPT is not liable, nor can it serve as a scapegoat here. The operator of the tool is in control. Much like the “guns don’t kill people” logic that has been widely applied (although recently firearms manufacturers have been opened up to lawsuits in some cases), ChatGPT did not take the blame for these grants being pulled. The operator is responsible. That’s logic I agree with, even in some of the recent cases around people using ChatGPT for self-help situations. We are operating a tool, and we are responsible for its outputs and how we use them.
I just think this ruling is interesting, and we’ll see how it goes on to inform other cases.