
The Impact Of AI On Society: Paying More Attention

by OnverZe

Opinion on the internet has been divided ever since interest in artificial intelligence (AI) began to grow. While some people believe AI is the way of the future and could lead to a better life, others are skeptical and continue to debate the potential drawbacks of the technology. Earlier this year, tech figures such as Elon Musk signed an open letter calling for a pause on the development of the most powerful AI systems.

Against that backdrop, OpenAI, the company behind the popular AI tool ChatGPT, has taken new steps toward guaranteeing AI safety. A recent blog post on OpenAI’s website states that the organization has given its board the final say on AI safety decisions. This comes after Sam Altman, the CEO of OpenAI, was removed by the previous board and then unexpectedly returned to the role. After Altman regained the CEO position, most of the former board members were replaced with new ones.


OpenAI takes steps to prevent harmful AI

OpenAI said in a blog post titled “Preparedness” that it would form a “dedicated team to oversee technical work and an operational structure for safety decision-making.”

In addition, the post states that the Preparedness team will “manage technical work to investigate frontier model capabilities, conduct assessments, and compile reports.” This technical work is essential to support OpenAI’s decision-making process for safe model development and deployment. A cross-functional Safety Advisory Group is also being established to review all reports and send them to both Leadership and the Board of Directors.

The blog post goes on to say, “While Leadership is the decision-maker, the Board of Directors holds the right to reverse decisions,” underscoring the board’s final authority.

The Preparedness Framework

The Preparedness Framework (Beta) describes the following strategy, per the blog post:

Ongoing Assessment: Regular reviews and updates using “scorecards” for every frontier model, pushing models to their limits to assess risks and measure the effectiveness of mitigations.

Risk Thresholds: Defined risk thresholds that trigger safety procedures in the tracked categories of cybersecurity, CBRN (chemical, biological, radiological, and nuclear) threats, persuasion, and model autonomy. Models that exceed certain risk levels cannot be deployed or developed further (a simplified sketch of this gating logic follows the list).

Dedicated Oversight: Technical work and an operational framework for safety decision-making are managed by a dedicated team. A Safety Advisory Group evaluates reports and offers guidance to the Board of Directors and Leadership.

Safety and Accountability Protocols: Frequent safety drills, the ability to respond quickly to urgent issues, external audits by qualified third parties, red-teaming, and coordination between internal and external teams to monitor and manage safety risks.

Risk Reduction: Collaboration with external partners, internal teams, and cutting-edge research to measure how risks change as models scale, along with ongoing efforts to uncover new “unknown unknowns.”
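To make the threshold idea concrete, here is a minimal Python sketch of how scorecard ratings could gate deployment and further development. The tracked category names come from the framework itself; the four-level rating scale, the function names, and the specific cutoffs below are simplified assumptions for illustration rather than OpenAI’s actual implementation.

# Hypothetical sketch of the scorecard gating described above. The tracked
# categories come from the framework; the rating scale and decision rules
# are simplified assumptions, not OpenAI's actual implementation.

RISK_LEVELS = ["low", "medium", "high", "critical"]

def overall_risk(scorecard):
    """Return the worst rating across all tracked categories."""
    return max(scorecard.values(), key=RISK_LEVELS.index)

def can_deploy(scorecard):
    # Assume deployment is allowed only if no category exceeds "medium".
    return RISK_LEVELS.index(overall_risk(scorecard)) <= RISK_LEVELS.index("medium")

def can_develop_further(scorecard):
    # Assume continued development is allowed only if no category reaches "critical".
    return RISK_LEVELS.index(overall_risk(scorecard)) <= RISK_LEVELS.index("high")

example_scorecard = {
    "cybersecurity": "medium",
    "CBRN": "low",
    "persuasion": "high",
    "model_autonomy": "low",
}

print(overall_risk(example_scorecard))         # high
print(can_deploy(example_scorecard))           # False
print(can_develop_further(example_scorecard))  # True

In this sketch the single worst category rating, not an average, determines the outcome, which reflects the cautious intent the framework describes.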


