Information Security in the Age of Generative AI
In March 2023, an incident at Samsung Electronics, a South Korean corporate giant, sent shockwaves through companies worldwide. Just 20 days after the company authorized the business use of ChatGPT, three serious cases of confidential information leakage were discovered. This case vividly illustrates the critical importance of corporate information security management in the era of generative AI.
The Samsung Electronics Case in Detail
From Authorization to Leakage: A 20-Day Timeline
On March 11, 2023, Samsung Electronics’ semiconductor division (Device Solutions Division) authorized employees to use ChatGPT for business purposes. Many employees welcomed the decision and began actively incorporating the tool into their daily work, drawn by its promise of significant productivity gains.
However, the enthusiasm was short-lived. Just 20 days after authorization, three serious cases of confidential information leakage were discovered. The incidents were first reported by South Korea’s Economist Korea on March 30 and subsequently received widespread coverage in media outlets around the world.
The Severity of the Leaked Information
The three confirmed leaks were as follows:
Semiconductor Equipment Source Code
An employee entered semiconductor equipment source code into ChatGPT to ask how to resolve an error. Semiconductor design information is among a chipmaker’s most critical confidential data and a core source of its competitiveness; it may include circuit designs, patented manufacturing-process details, and next-generation product development plans.
Yield and Defective Equipment Detection Program
An employee uploaded the source code of a program for measuring yield and detecting defective equipment, asking ChatGPT to optimize it. Programs that control manufacturing equipment are important intellectual property affecting quality control and production efficiency, and this code embodied proprietary manufacturing know-how and quality-control algorithms.
Internal Meeting Audio Data and Minutes
An employee transcribed audio recordings of internal meetings and entered the text into ChatGPT to generate minutes. Such content can include top-secret information bearing on the company’s future, such as management strategy, M&A plans, and financial data, and its external leakage could directly affect corporate value.
Why the Leakage Occurred: Explanation of Technical Mechanisms
How ChatGPT Processes Data
Large Language Models (LLMs) such as ChatGPT transmit the text users enter to external servers for processing. Crucially, that data may then be handled in the following ways (the sketch after this list makes the flow concrete):
Use as Training Data: Data entered may be used as training data to improve the model.
Server Storage: Data is stored on servers, either temporarily or permanently, for processing purposes.
Risk of Third-Party Disclosure: Depending on the service provider’s data policy, data may be shared with third parties.
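To make this concrete, here is a minimal sketch of what technically happens when an employee pastes material into a generative AI service through its API. It assumes the official openai Python SDK and an OPENAI_API_KEY environment variable; the model name and the snippet of “confidential” code are illustrative only.

```python
# Minimal sketch: every prompt becomes an HTTPS request to an external server.
# Assumes the official openai Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model name and snippet are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

confidential_code = "def detect_defects(wafer_scan): ..."  # imagine proprietary source

# The moment this call executes, the text leaves the corporate network and is
# processed, and possibly retained, on the provider's servers.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Why does this code fail?\n{confidential_code}"}],
)
print(response.choices[0].message.content)
```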
Lack of Employee Awareness
Many employees likely perceived ChatGPT as merely a “convenient search engine” or an “advanced dictionary.” Few understood that, in reality, everything they entered was transmitted to external servers and could be incorporated into the AI’s training data.
Measures Companies Should Take
1. Establishing Comprehensive AI Usage Policies
Companies need to develop clear guidelines for the use of generative AI. These policies should include the following elements (a sketch of how such rules might be enforced in code follows the list):
Clarification of Permissible AI Tools: A list of approved tools and explicit prohibition of unauthorized tools.
Classification of Permissible Information Inputs: Tiering of information by confidentiality level, with handling rules for each tier.
Restrictions on Usage Scenarios: Clear standards for which business activities AI can and cannot be used for.
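To illustrate how such rules could be enforced, the sketch below defines confidentiality tiers and a simple gate that decides whether text at a given tier may be sent to a given tool. The tier names, tool names, and rules are hypothetical placeholders for a company’s actual classification scheme.

```python
# Hypothetical policy gate: which confidentiality tiers may go to which tools.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    TOP_SECRET = 3

# Illustrative rules: the highest tier each tool is approved to receive.
APPROVED_TOOLS = {
    "external_chatbot": Sensitivity.PUBLIC,   # public web chat interfaces
    "enterprise_api": Sensitivity.INTERNAL,   # contractually governed API
    "private_llm": Sensitivity.TOP_SECRET,    # on-premises model
}

def may_submit(tool: str, level: Sensitivity) -> bool:
    """Return True if policy permits sending data of this tier to the tool."""
    max_level = APPROVED_TOOLS.get(tool)
    return max_level is not None and level <= max_level

assert may_submit("private_llm", Sensitivity.CONFIDENTIAL)
assert not may_submit("external_chatbot", Sensitivity.CONFIDENTIAL)
```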
2. Implementation of Technical Measures
Adoption of Private LLMs
This involves building LLMs within the company and utilizing AI without going through external servers. This prevents data from leaving the company’s control.
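A minimal sketch of the idea, assuming the Hugging Face transformers library and an open-weights model already downloaded to local storage (the model name is illustrative):

```python
# Minimal on-premises inference sketch. Assumes the Hugging Face transformers
# library (pip install transformers torch) and a locally stored open-weights
# model; the model name below is illustrative.
from transformers import pipeline

# Inference runs entirely on company hardware: prompts and outputs
# never traverse an external API.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

result = generator(
    "Summarize the following internal meeting notes: ...",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```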
Appropriate API Configuration
When using commercial APIs, data-handling settings must be configured appropriately. As of March 1, 2023, OpenAI’s API no longer uses input data for training by default: unless a company explicitly opts in to sharing, data sent through the API is not used for training and is deleted after a retention period of at most 30 days. The ChatGPT web interface, by contrast, operates on an opt-out basis: input may be used as training data unless this is explicitly disabled in the settings.
Utilization of Data Masking Technology
Introducing technology that automatically detects and then masks or replaces confidential information reduces the risk of such information being entered inadvertently.
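A minimal sketch of the approach using regular expressions follows; the patterns are illustrative only, and a production system would combine dedicated DLP tooling, named-entity recognition, and dictionaries of internal code names.

```python
# Illustrative regex-based masking applied before text is sent to an AI service.
import re

MASKING_RULES = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}-\d{4}-\d{4}\b"), "[PHONE]"),                  # phone numbers (illustrative format)
    (re.compile(r"\b(?:API|SECRET)_KEY\s*=\s*\S+", re.I), "[SECRET]"),  # hard-coded credentials
]

def mask(text: str) -> str:
    """Replace detected confidential patterns with neutral placeholders."""
    for pattern, replacement in MASKING_RULES:
        text = pattern.sub(replacement, text)
    return text

print(mask("Contact kim@example.com, API_KEY=sk-123abc"))
# -> "Contact [EMAIL], [SECRET]"
```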
3. Thorough Employee Education
Technical measures alone are insufficient. Continuous education programs must be implemented to ensure that each employee correctly understands the risks of generative AI and can use it appropriately.
Education programs should include the following content:
- Basic mechanisms of generative AI and data flow
- Risks of information leakage and actual case studies
- Safe usage methods and prohibited actions
- Reporting procedures when incidents occur
4. Monitoring and Incident Response Systems
Real-Time Monitoring
Establish a system to monitor employee AI usage and detect inappropriate use at an early stage.
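As one hedged illustration, the sketch below shows a monitoring hook that scans each outbound prompt for sensitive markers and raises an alert. The markers and the alert channel (a log warning here) are hypothetical placeholders for a real DLP integration.

```python
# Hypothetical monitoring hook: scan outbound AI prompts and alert on
# likely confidential content. Markers and alerting are placeholders.
import logging
import re

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("ai-usage-monitor")

SENSITIVE_MARKERS = [
    re.compile(r"(?i)\b(confidential|internal only|do not distribute)\b"),
    re.compile(r"(?i)\bproject\s+[a-z]+\b"),  # illustrative code-name pattern
]

def inspect_prompt(user: str, tool: str, prompt: str) -> None:
    """Log a warning (in practice: notify the security team) on suspicious prompts."""
    for marker in SENSITIVE_MARKERS:
        if marker.search(prompt):
            logger.warning("possible leak: user=%s tool=%s marker=%s",
                           user, tool, marker.pattern)
            return

inspect_prompt("jlee", "external_chatbot", "CONFIDENTIAL: Q3 yield data ...")
```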
Incident Response Plan
Develop response procedures in advance for when information leakage occurs, and conduct regular drills.
New Risk Management Paradigm in the Generative AI Era
The Threat of Shadow AI
While the concept of “Shadow IT” is widely known, a new threat called “Shadow AI” has emerged in the generative AI era: AI tools that employees use on their own, outside the IT department’s control.
Characteristics of Shadow AI:
- AI tools available for free or through personal contracts
- Usage that the IT department has no visibility into
- Use with inappropriate security settings
- Lack of data governance
Application of Zero Trust Architecture
The zero trust philosophy of “never trust, always verify” is essential for security in the generative AI era. Applied to AI usage, it means the following (a gateway sketch follows the list):
Monitor All AI Usage: Even for approved tools, monitor usage content.
Principle of Least Privilege: Allow only the minimum necessary information to be input into AI.
Continuous Verification: Conduct regular security assessments and policy reviews.
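The sketch below ties these three principles together in a hypothetical gateway placed between employees and AI tools; the allow-list, deny-list, and logging destination are illustrative assumptions.

```python
# Hypothetical zero-trust gateway for AI usage: every request is verified,
# minimized, and logged; nothing is implicitly trusted.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai-gateway")

ALLOWED_TOOLS = {"enterprise_api", "private_llm"}  # illustrative allow-list
BLOCKED_TERMS = {"confidential", "top secret"}     # illustrative deny-list

def submit(user: str, tool: str, prompt: str) -> bool:
    """Verify every request before it reaches any AI tool (never trust, always verify)."""
    if tool not in ALLOWED_TOOLS:  # verify the tool itself
        audit.warning("blocked: unapproved tool %s (user=%s)", tool, user)
        return False
    if any(term in prompt.lower() for term in BLOCKED_TERMS):  # least privilege on content
        audit.warning("blocked: sensitive content (user=%s, tool=%s)", user, tool)
        return False
    audit.info("allowed: user=%s tool=%s chars=%d", user, tool, len(prompt))  # audit trail
    return True

submit("jlee", "external_chatbot", "Translate this press release")  # blocked: tool
submit("jlee", "private_llm", "Summarize these meeting notes")      # allowed and logged
```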
Regulatory Trends and Future Outlook
Regulatory Developments by Country
Countries around the world are advancing the development of regulations regarding AI usage.
European Union: The AI Act regulates AI systems with a risk-based approach. The regulation came into force on August 1, 2024, and will be applied in phases through 2027.
United States: While no comprehensive federal-level regulation has been enacted, state-level regulations are progressing. President Biden’s Executive Order on AI (October 2023) established federal guidelines for safe AI development and use.
Japan: The Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry published the “AI Business Guidelines (Version 1.0)” on April 19, 2024, promoting voluntary corporate initiatives as soft law. Updated versions continue to be released to address evolving AI technologies.
China: The “Generative AI Service Management Measures” came into effect in August 2023, requiring registration and compliance for generative AI services.
Establishment of Industry Standards
International standards are also being developed. ISO/IEC 42001 (AI Management System), published in December 2023, provides a comprehensive framework for organizations to manage AI systems responsibly, and ISO/IEC 23053 (Framework for Artificial Intelligence Systems Using Machine Learning), published in June 2022, describes a common framework for AI systems built on machine learning. Companies are increasingly expected to align their AI usage with these standards.
Additionally, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF 1.0) in January 2023, providing a voluntary framework for managing AI risks. This has become a key reference for organizations worldwide.
Table: Key AI Regulations and Standards Comparison
| Region/Organization | Regulation/Standard | Publication Date | Key Features |
| --- | --- | --- | --- |
| EU | AI Act | August 2024 (in force) | Risk-based classification; strict requirements for high-risk AI; phased implementation through 2027 |
| United States | Executive Order on AI | October 2023 | Federal guidelines; focuses on safety, security, and trustworthiness |
| Japan | AI Business Guidelines v1.0 | April 2024 | Soft-law approach; voluntary compliance; regularly updated |
| China | Generative AI Service Management Measures | August 2023 | Mandatory registration; content control requirements |
| ISO/IEC | ISO/IEC 42001 | December 2023 | AI management system standard; comprehensive organizational framework |
| ISO/IEC | ISO/IEC 23053 | June 2022 | Common framework for AI systems using machine learning |
| NIST | AI Risk Management Framework | January 2023 | Voluntary framework; widely adopted globally |
The Path to Balanced AI Utilization
The Samsung Electronics case clearly demonstrates the dual nature of the powerful tool that is generative AI. While it has the potential to dramatically improve productivity when used appropriately, inappropriate use carries serious risks that could threaten a company’s survival.
What matters is not to completely prohibit AI use. Rather, it is to correctly understand the risks, implement appropriate countermeasures, and actively utilize AI to improve competitiveness. To achieve this, it is necessary to comprehensively implement technical, organizational, and human measures.
The generative AI era has only just begun, and new risks will almost certainly come to light. By learning the lessons of the Samsung Electronics case and building robust risk management systems, however, companies should be able to enjoy the benefits of AI safely.
Corporate leaders must build governance systems that keep pace with technological evolution. This is not merely a compliance issue but a strategic challenge that will determine a company’s competitiveness and sustainability in the digital age.
Best Practices for Secure AI Implementation
Based on lessons learned from the Samsung case and evolving global standards, companies should consider the following best practices:
Implement AI Governance Frameworks: Establish clear roles and responsibilities for AI oversight, including dedicated AI ethics committees and risk assessment teams.
Adopt Privacy-Enhancing Technologies (PETs): Utilize techniques such as differential privacy, federated learning, and homomorphic encryption to protect sensitive data while enabling AI functionality (a minimal differential-privacy sketch follows this list).
Conduct Regular AI Audits: Perform periodic assessments of AI systems for compliance, security vulnerabilities, and alignment with organizational policies.
Establish Clear Data Classification Schemes: Implement a tiered approach to data sensitivity, with explicit rules about which data categories can be processed by which AI systems.
Create Incident Response Playbooks: Develop specific procedures for AI-related security incidents, including data breach notification protocols and stakeholder communication strategies.
Foster a Culture of Responsible AI: Beyond policy compliance, cultivate organizational awareness about ethical AI use and security consciousness through ongoing training and leadership commitment.
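To give one of these practices concrete shape, here is a minimal sketch of the Laplace mechanism from differential privacy, one of the PETs mentioned above: noise calibrated to a query’s sensitivity is added before an aggregate statistic is released, so that no individual record can be inferred from the output. All values are illustrative.

```python
# Minimal Laplace-mechanism sketch (differential privacy): add noise scaled
# to sensitivity/epsilon before releasing an aggregate statistic.
import numpy as np

rng = np.random.default_rng(seed=42)

def private_mean(values: np.ndarray, lower: float, upper: float,
                 epsilon: float) -> float:
    """Release the mean with epsilon-differential privacy."""
    clipped = np.clip(values, lower, upper)
    # Sensitivity of the mean: one record can shift it by at most this much.
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

salaries = np.array([52_000, 61_000, 58_500, 70_200, 49_900])
print(private_mean(salaries, lower=30_000, upper=120_000, epsilon=0.5))
```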
The integration of generative AI into business operations represents both an unprecedented opportunity and a complex challenge. Companies that successfully navigate this landscape will be those that approach AI adoption with both ambition and caution, balancing innovation with robust security practices. The Samsung Electronics incident serves as a crucial reminder that in the age of AI, information security is not just an IT concern—it is a fundamental business imperative that requires attention at all levels of the organization.