
US DOE Releases AI Guidance






Educators are advised to collaborate closely with vendors and technology developers to navigate the risks associated with integrating artificial intelligence into schools, according to recent guidance from the U.S. Department of Education released on July 8. Titled "Designing for Education with Artificial Intelligence: An Essential Guide for Developers," the guidance provides comprehensive recommendations tailored for both vendors and school district administrators.


The guidance acknowledges the complex position that companies and tech developers find themselves in regarding AI. While many are eager to innovate, there is a strong emphasis on proceeding cautiously by incorporating educator feedback, ensuring rigorous testing, and avoiding the reinforcement of societal biases or dissemination of inaccurate information.


Developers are eager to meet current market demands and explore new applications for AI without falling behind competitors. Simultaneously, the Department of Education underscores in its guidance that this isn't an either-or scenario. It encourages vendors and educators to innovate responsibly with AI, such as using it to assist teachers in tasks like composing emails. However, critical considerations must be addressed, such as safeguarding student privacy and ensuring that AI applications like early warning systems for identifying at-risk students do not inadvertently perpetuate biases or violate civil rights.


The guidance emphasizes that educators should retain oversight over AI-driven decisions and urges developers to base their tools on evidence-based practices. It underscores the importance of incorporating educator feedback in the design process while prioritizing student data protection and civil rights. The initiative stems from the Biden administration's Blueprint for an AI Bill of Rights and subsequent executive orders, which directed the Department of Education to develop AI policy resources for school districts.


The guidance itself is non-regulatory: it is intended to shape thinking among educators and developers and to inform state and district policies without mandating compliance. It builds on the department's earlier report on AI in K-12 education, released in May 2023. Jeremy Roschelle, co-executive director of learning science research at Digital Promise, notes that these principles provide a foundation for responsible AI development in education and emphasizes the need to address issues like algorithmic discrimination and privacy concerns promptly.


The guidance outlines five key recommendations for vendors and educators:

  1. Design products with teaching and learning in mind, ensuring human oversight.

  2. Clearly demonstrate the use of evidence-based principles in product development.

  3. Mitigate bias and algorithmic discrimination.

  4. Protect student privacy rigorously, considering unique AI-related cybersecurity risks.

  5. Maintain transparency in product design and functionality, supporting AI literacy in educational technology.


The guidance reflects extensive input from stakeholders, including students, parents, educators, developers, and industry groups, gathered through public forums and consultations. This collaborative approach aims to foster responsible AI integration in education while promoting equitable learning opportunities and safeguarding student rights and privacy.


The full guidance can be accessed here: https://www2.ed.gov/documents/ai-report/ai-report.pdf
