NIST Introduces GenAI

The U.S. Commerce Department’s National Institute of Standards and Technology (NIST), which develops and evaluates technology for government, industry, and the public, has unveiled NIST GenAI, a new program to study generative AI technologies, including text and image generation.

NIST GenAI is set to release standardized benchmarks, help build detection systems that verify content authenticity (such as deepfake detectors), and encourage the development of software that can trace the source of fabricated or misleading AI-generated content. NIST describes these objectives on the newly launched NIST GenAI website and in a press release.

“The NIST GenAI program will issue a series of challenge problems [intended] to evaluate and measure the capabilities and limitations of generative AI technologies. These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content,” the press release reads.


NIST GenAI’s inaugural project is a pilot study to develop robust systems that can distinguish between human-authored and AI-generated media, starting with text. Although numerous services claim to detect deepfakes, studies and our own testing have shown them to be unreliable, particularly for text.

NIST GenAI is inviting teams from academia, industry, and research laboratories to participate by submitting either “generators,” AI systems that produce content, or “discriminators,” systems engineered to recognize AI-generated content.

In this study, generators must produce summaries of 250 words or fewer based on a given topic and a set of documents, while discriminators must determine whether a provided summary is potentially AI-generated. To ensure impartiality, NIST GenAI will provide the data needed to test the generators. Notably, systems trained solely on publicly available data and those that fail to “[adhere] to pertinent laws and regulations” will not be accepted, NIST says.
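To make the two roles concrete, here is a minimal, purely illustrative Python sketch, assuming nothing about NIST’s actual submission format or evaluation harness: a toy generator that caps its output at 250 words and a toy discriminator that returns a rough score for how likely a summary is to be AI-generated. A real entry would use a trained classifier rather than the crude lexical-diversity placeholder shown here.

```python
# Illustrative sketch only: the real NIST GenAI submission format and
# evaluation harness are defined by NIST and are not reproduced here.
# This toy code mirrors the two roles described above.

def generate_summary(topic: str, documents: list[str], max_words: int = 250) -> str:
    """Toy generator: concatenates words from the source documents until
    the 250-word cap is reached."""
    words: list[str] = []
    for doc in documents:
        for word in doc.split():
            if len(words) >= max_words:
                return " ".join(words)
            words.append(word)
    return " ".join(words)


def discriminate(summary: str) -> float:
    """Toy discriminator: returns a pseudo-probability that the summary is
    AI-generated, using a lexical-diversity heuristic as a stand-in for a
    trained classifier."""
    tokens = summary.lower().split()
    if not tokens:
        return 0.5
    type_token_ratio = len(set(tokens)) / len(tokens)
    # Lower lexical diversity nudges the score toward "AI-generated".
    return max(0.0, min(1.0, 1.0 - type_token_ratio))


if __name__ == "__main__":
    docs = ["Generative AI systems can produce fluent text at scale. " * 5]
    summary = generate_summary("generative AI", docs)
    print(f"summary length: {len(summary.split())} words")
    print(f"AI-generated score: {discriminate(summary):.2f}")
```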

Registration for the pilot program will commence on May 1, marking the beginning of a two-round process scheduled to conclude on August 2. The outcomes of this study are anticipated to be disclosed in February 2025.

The launch of NIST GenAI and its emphasis on combating deepfakes arrive amid a significant surge in AI-generated misinformation and disinformation.

As reported by Clarity, a firm specializing in deepfake detection, the production and dissemination of deepfakes have surged by 900% compared with last year. This sharp increase has understandably raised concerns: a recent YouGov survey found that 85% of Americans are worried about the spread of misleading deepfakes online.

The introduction of NIST GenAI aligns with NIST’s response to President Joe Biden’s executive order on AI, which mandates enhanced transparency from AI companies regarding the inner workings of their models. The executive order also establishes a set of new standards, including guidelines for labeling content generated by AI.

This marks NIST’s first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to lead the agency’s AI Safety Institute.

Christiano’s appointment has sparked controversy due to his “doomerist” views; he once predicted a “50% chance AI development could end in [humanity’s destruction].” Critics, reportedly including scientists within NIST, worry that Christiano might steer the AI Safety Institute toward speculative scenarios rather than the more immediate, tangible risks associated with AI.

NIST asserts that NIST GenAI will be pivotal in the AI Safety Institute’s endeavors.
