

After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. The New York City, Los Angeles Unified, Seattle, and Baltimore school districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content-generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, the US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology. In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also turned on use of Google Bard as part of the City of Boston’s enterprise-wide use of Google Workspace so that all public servants have access.

The “responsible experimentation approach” adopted in Boston, the first policy of its kind in the US, could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on using AI for social good. The Boston policy even explains how AI can help produce code snippets and assist less technical individuals.

As a result, even interns and student workers could start to engage in technical projects, such as creating web pages that help to communicate much-needed government information. Still, the policy advocates a critical approach to the technology and personal responsibility for use of the tools. Public servants are thus encouraged to proofread any work developed using generative AI to ensure that hallucinations and mistakes do not creep into what they publish.

The guidelines emphasize that privacy, security, and the public purpose should be prioritized in the use of the technology, weighing impact on the environment and constituents’ digital rights. These principles represent a shift from fear-mongering about the dangers of AI to a more proactive and responsible approach that provides guidance on how to use AI in the public workforce.

Boston’s generative AI policy sets a new precedent in how governments approach AI. Instead of the usual narrative about AI killing jobs, or talk only of AI bias, the city’s letter explains that, by enabling better communication and conversation with residents of all kinds, AI could help repair historical harm to marginalized communities and foster inclusivity. By supporting responsible experimentation, transparency, and collective learning, it opens the door to realizing AI’s potential to do good in governance. If more public servants and politicians embrace these technologies, practical experience can inform sensible regulations.
