New AI solutions could exacerbate known security risks, ICO warns firms




24 May 2019

Author: Jay Jay


The deployment of state-of-the-art artificial intelligence (AI) solutions could exacerbate known security risks and make them more difficult to manage, not only because of the increased use of open-source software but also because the creators of AI solutions may not understand broader security compliance requirements, the ICO has warned.

In September last year, a report from Help Net Security revealed that organisations across the globe are adopting AI and machine learning technologies at a brisk pace, so much so that global spending on cognitive and artificial intelligence systems could reach $77.6 billion (£58.82 billion) in 2022, three times the amount spent by organisations in 2018.

The report added that organisations are primarily adopting and deploying AI solutions for uses such as automated customer service agents, automated threat intelligence and prevention systems, sales process recommendation and automation, and automated preventive maintenance.

David Schubmehl, research director, Cognitive/Artificial Intelligence Systems at IDC, said that organisations using AI solutions are benefitting in terms of revenue, profit, and overall leadership in their respective industries and segments, and that the market for AI therefore continues to grow at a rapid pace.

AI solutions could introduce fresh security risks

Noting that organisations will continue to adopt AI solutions at a healthy pace, Reuben Binns, Research Fellow in Artificial Intelligence at the ICO, has warned, along with other experts, that the rapid adoption of AI could exacerbate known security risks and make them more difficult to manage.

Binns said that AI systems may introduce fresh complexities not found in traditional IT systems, and that running AI solutions smoothly will involve integrating them with other new and existing IT components that are themselves intricately connected.

Because of such complex integrations, it may be difficult for organisations to identify and manage some security risks, and the risk of outages could also increase. At the same time, the people involved in building and deploying AI solutions may not be aware of broader security compliance requirements, which could further heighten security risks.

“It is not possible to list all known security risks that might be exacerbated when AI is used to process personal data. The impact of AI on security will depend on the way the technology is built and deployed, the complexity of the organisation, and the strength and maturity of the existing risk management capabilities,” he said.

How can organisations mitigate such security risks?

In advice to organisations that use AI technology designed, built, and run by third-party firms, Binns said that such organisations need to assess the security of any externally maintained code and frameworks by introducing coding standards and source code review. Organisations developing machine learning systems can also mitigate security risks by separating the development environment from the rest of their IT infrastructure.
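One concrete way to back up that kind of review gate is to pin and verify externally maintained packages before they are allowed into the ML development environment. The sketch below is a minimal, illustrative Python example of hash-pinning, assuming a reviewed manifest of expected checksums; the package name and hash are hypothetical and this is not a specific recommendation from the ICO guidance.

```python
# Minimal sketch: verify externally sourced package archives against pinned
# hashes before they enter the development environment. The manifest entries
# below are hypothetical examples, not real package hashes.
import hashlib
from pathlib import Path

# Pinned manifest produced during source code review: archive name -> SHA-256
PINNED_HASHES = {
    "vendor_ml_framework-1.4.2.tar.gz": "<sha256 recorded at review time>",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_downloads(download_dir: str) -> bool:
    """Reject any archive that is missing or does not match its reviewed hash."""
    ok = True
    for name, expected in PINNED_HASHES.items():
        candidate = Path(download_dir) / name
        if not candidate.exists() or sha256_of(candidate) != expected:
            print(f"BLOCK: {name} missing or does not match reviewed hash")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify_downloads("./third_party_downloads"):
        raise SystemExit(1)
```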

Referring to a study that found the most popular ML development frameworks include up to 887,000 lines of code and rely on 137 external dependencies, Binns said that organisations can use ‘virtual machines’ or ‘containers’ to isolate programmes from the rest of the IT system. They can also train an ML model in one programming language (eg Python) and, before deployment, convert it into another (eg Java), which makes insecure coding less likely.
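As a rough illustration of that train-in-one-language, deploy-in-another pattern, a model fitted in Python can be exported to a portable format such as ONNX so that a separate runtime (for example, ONNX Runtime's Java API) can serve it without the Python training code ever reaching production. The sketch below assumes scikit-learn and skl2onnx are installed; the dataset, model, and file name are illustrative, and this is one possible realisation of the pattern rather than the ICO's prescribed approach.

```python
# Minimal sketch: train a model in Python, then export only a portable ONNX
# artefact for deployment in a different runtime (e.g. a Java service).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a simple classifier inside the isolated development environment.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Convert the trained model to ONNX; only this artefact leaves the dev environment.
onnx_model = convert_sklearn(
    model,
    initial_types=[("input", FloatTensorType([None, X.shape[1]]))],
)
with open("classifier.onnx", "wb") as fh:
    fh.write(onnx_model.SerializeToString())
```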





