Samsung Semiconductor initially permitted its fabrication engineers to use ChatGPT, an artificial intelligence (AI) language model, to assist with their work. However, engineers began using the tool to quickly fix errors in their source code, accidentally disclosing confidential information such as internal meeting notes and sensitive data about fab performance and yields.
To mitigate this security risk, Samsung Semiconductor now intends to develop its own AI service, akin to ChatGPT, solely for internal use. In the meantime, it has imposed a rule limiting the length of questions submitted to the service to 1024 bytes. By building a secure in-house service and instituting measures to prevent the unintentional divulgence of confidential information, the company is proactively addressing the security risks posed by AI language models.
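Samsung has not published how the 1024-byte limit is enforced, but a check of this kind is straightforward to sketch. The function name and behavior below are illustrative assumptions, not Samsung's implementation; the one detail worth noting is that a byte limit is stricter than a character limit for non-ASCII text, since Korean characters take three bytes each in UTF-8.

```python
# Hypothetical sketch: enforcing a 1024-byte cap on prompts before
# they reach an internal AI service. The cap applies to the encoded
# byte length, not the character count, so multi-byte characters
# (e.g. Korean text) consume the budget faster.

MAX_PROMPT_BYTES = 1024  # limit described in the article


def prompt_within_limit(prompt: str) -> bool:
    """Return True if the UTF-8 encoding of the prompt fits the cap."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES


# 1024 ASCII characters fit exactly; the same count of Korean
# characters does not, because each encodes to 3 bytes in UTF-8.
print(prompt_within_limit("a" * 1024))   # ASCII: 1024 bytes
print(prompt_within_limit("가" * 1024))  # Korean: 3072 bytes
```

Counting bytes rather than characters is the conservative choice here, since it bounds the actual payload size sent over the wire regardless of script.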
Samsung Semiconductor has encountered three incidents in which the use of ChatGPT resulted in data breaches, all within a span of just 20 days. The most concerning involved an employee who submitted the source code of a proprietary program to ChatGPT to fix errors, unintentionally exposing the inner workings of a highly classified application to an external company's AI tool. Such a disclosure could severely affect the company's intellectual property and overall security.
In a second incident, an employee reportedly entered test sequences used to identify defective chips into ChatGPT and asked it to optimize them. Optimizing these test patterns could shorten the time needed to detect defective chips and yield substantial cost savings, but sharing them handed sensitive manufacturing data to an external service. These events highlight the risks of using AI language models in sensitive settings and underscore the need for proper safeguards and guidelines.
A third case involving a Samsung employee's use of ChatGPT is linked to the Naver Clova application. The employee used Naver Clova to transcribe a recorded meeting into a document and then submitted the document to ChatGPT to prepare a presentation. This raises concerns about unauthorized access and data sharing, which could inadvertently disclose confidential and proprietary information, and it underscores the importance of robust security measures and guidelines when using AI language models that may handle sensitive data.
The use of ChatGPT by Samsung Semiconductor workers has put confidential information at considerable risk, prompting Samsung to warn its workforce about the dangers of the technology. Samsung Electronics informed executives and staff that data entered into ChatGPT is transmitted to and stored on external servers, creating a significant risk of sensitive data leaking in ways that are difficult for Samsung to remediate.
Although ChatGPT is a powerful tool, prompts submitted to it may be retained and used as training data, potentially exposing proprietary information to third parties. This underscores the importance of protecting confidential data and of controlling the use of AI language models in sensitive environments.
Samsung is taking steps to prevent future unauthorized disclosures of confidential information. The company is preparing protective measures that could include blocking access to ChatGPT on the company network if another breach occurs, even after its emergency information-protection measures take effect.
Despite the risks associated with these tools, generative AI and other AI-assisted electronic design automation tools are crucial to the future of chip production. While data protection must come first, it is equally important to acknowledge the significant advances these technologies offer and to establish guidelines that ensure their safe and responsible use. When asked about the information leak, a representative from Samsung Electronics declined to confirm or deny the incident, saying the matter was regarded as an internal concern.