OpenAI updates policy on “military and warfare”

  • A policy update made on 10th January suggests OpenAI has changed its stance on the use of its technology for military and warfare applications.

  • The change in wording may signal growing interest from military agencies in AI going forward.

  • The company still warns against using its technology to develop weapons.

  • Following last year's instability caused by a leadership shake-up, operations at OpenAI appear to be back to normal.


    The company launched its online GPT store for developers last week, but in a more concerning development, OpenAI has updated its usage policy in a way that has many worried its technology could be used by military agencies down the line.


    As spotted by The Intercept (paywalled), the company updated its usage policies on 10th January.


    The change drawing everyone’s attention concerns the company’s Universal Policies, specifically the section outlining prohibited uses of OpenAI technology, including the development of weapons.


    “Don’t use our service to harm yourself or others – for example, don’t use our services to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system,” it explains.


    As Engadget points out, there is a crucial omission here: the mention of “military and warfare” has been removed from the Universal Policies.


    We have yet to officially see Large Language Models (LLMs) being ..
