A joint artificial intelligence (AI) security white paper by stc Group and Huawei sheds light on the challenges enterprises face in securing emerging technologies. Research shows that the global AI market is growing at 36.1 percent annually and will be worth $641 billion by 2028.
The white paper was released at GSMA’s recent Mobile 360 Riyadh, an event that brought together policymakers and regulators along with leaders from the region’s ICT sector.
stc and Huawei are dedicated to AI security knowledge-sharing to foster a secure AI application environment and contribute to an AI-enabled intelligent world. To tackle emerging AI security challenges, the white paper proposes introducing security assessments of AI models as early as the design and development stage, complemented by ongoing security monitoring and auditing measures that provide run-time protection once AI systems are in operation.
Yasser Al-Swailem, vice president of cybersecurity at stc Group, said: “Collaboration between stc Group and Huawei demonstrates how we leverage industry partnerships to build a secure and integrated response to emerging security challenges. We firmly believe that securing AI platforms is an ecosystem-based effort with multiple stakeholders working together to strengthen our AI platform and prevent the prevalent attacks that grow in sophistication by each passing day.”
Ibrahim Alshmarani, chief security and privacy officer at Huawei Saudi, said: “There is no doubt AI will impact our society in unprecedented ways. However, cybersecurity poses a real challenge to AI platforms and could stall much-needed progress. As we build these systems, we must ensure that security is built from the ground up, which is the most effective method to plug gaps that cybercriminals can exploit. We are delighted to work with partners such as the stc Group to help secure the region’s digital transformation.”
With the accumulation of big data, dramatic improvements in computing power, and continuous innovation in machine learning, AI technologies such as image recognition, voice recognition, and natural language processing have become ubiquitous. However, AI cuts both ways for computer security: on the one hand, it can power defensive systems such as malware and network-attack detection; on the other, it can be exploited to launch more effective attacks. Building robust AI systems that are resilient to external threats is therefore crucial.