South Korea's KISA Launches Physical AI Security Standards Project
South Korea's internet security agency, KISA, has launched a project to create security standards for physical AI systems. The move comes as officials worry that cyberattacks could disrupt industrial systems that use AI to control real-world equipment.
The Korea Internet & Security Agency (KISA) has started a major project to build security standards for physical AI systems: AI that controls real machines and equipment, not just computer software.
The agency is concerned that hackers could target these systems and cause serious problems in factories, transportation, and other key industries. Unlike conventional cyberattacks that steal information, attacks on physical AI could damage equipment or injure people.
KISA has been working on AI security for several years. In 2024, the agency ran a program to help companies build better AI security products, and it published an AI security guide.
The U.S. is pursuing similar rules: the National Institute of Standards and Technology (NIST) is developing security standards for AI systems that can act independently, known as agentic AI.
As more industries use AI to control physical equipment, security experts say clear safety rules are essential to prevent accidents and attacks.
Physical AI systems control everything from factory robots to self-driving cars. If hackers break into these systems, they could cause real-world damage rather than just stealing data, so strong security rules could prevent accidents and keep these technologies safe.
KISA will develop the security standards and likely publish guidelines for companies using physical AI systems.