Samsung Bans ChatGPT After Engineers Use It to Fix Proprietary Code

Employees who continue to use ChatGPT will face ‘disciplinary action up to and including termination of employment,’ according to a staff memo

Samsung is banning ChatGPT and other generative AI tools on company devices after at least three employees used it to troubleshoot proprietary code and summarize internal meeting notes.

In a memo sent out Monday, the South Korea-based company notified staff of its decision to “temporarily restrict the use of generative AI,” Bloomberg reports. In addition to ChatGPT, that could include competitor chatbots like Google Bard and Microsoft Bing AI.

“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung says. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Samsung executives are concerned that ChatGPT will store internal data without giving the company the option to delete it before the chatbot spits it out in responses to users around the world, the memo says. Employees who continue to use ChatGPT will face “disciplinary action up to and including termination of employment,” Samsung says.

In the meantime, Samsung says it is creating in-house tools to replace common uses of ChatGPT, such as translation, summarizing notes, and fixing buggy code. The new “incognito” mode that OpenAI added last week, Reuters reports, is clearly not sufficient.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” the memo says.

Other large companies have banned or restricted the use of ChatGPT. In March, executives in Walmart’s software engineering branch issued a ChatGPT warning along with a set of internal usage guidelines, the US Sun reports.

Verizon and large banks like JP Morgan have done the same, according to The Wall Street Journal.

ChatGPT has attracted millions of users, with particular interest from software engineers who use it to write simple programs and reduce time spent identifying and fixing issues in code. Savvy users have even figured out how to use it to create entire websites.

Recognizing this large user base, Google recently updated Bard with the ability to program in over 20 languages.

See more here: pcmag.com

Header image: NurPhoto


Comments (1)


    Howdy


I’m surprised Samsung allowed employees to run something posing such a security risk in the first place. It says something about their safety and security mindedness.
Better late than never, but maybe they need to audit and reassess which software sources they trust to be given access to their machines at all, even though there is no such thing as real trust in software or firmware.

“‘HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,’ the memo says.”
It is lines of code subject to tampering; it cannot be secure. It certainly isn’t trustworthy in a corporate environment.
