Altcoins Talks - Cryptocurrency Forum

Further Discussions => General Discussion => Topic started by: Matthead012 on October 24, 2023, 09:17:53 PM

Title: Enterprises are struggling to address the security concerns of generative AI
Post by: Matthead012 on October 24, 2023, 09:17:53 PM
ExtraHop, a cloud-native network detection and response company, revealed a troubling trend in a recent study: enterprises are struggling with the security implications of employee generative AI use.
Their new research report, The Generative AI Tipping Point, sheds light on the challenges organizations face as generative AI technology becomes more prevalent in the workplace.

The report examines how businesses are coping with the use of generative AI tools, revealing significant cognitive dissonance among IT and security leaders. Notably, 73 percent of these leaders admitted that their employees use generative AI tools or large language models (LLMs) at work on a regular basis. Despite this, an overwhelming majority admitted to being unsure how to effectively address the associated security risks.
When asked about their worries, IT and security leaders expressed more concern about receiving inaccurate or nonsensical responses than about serious security issues such as the exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).

ExtraHop Co-Founder and Chief Scientist Raja Mukerji stated, "By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come."
One of the study's unexpected findings was the ineffectiveness of generative AI bans. Approximately 32% of respondents said their organizations had prohibited the use of these tools, yet only 5% reported that employees never use them, demonstrating that bans alone are insufficient to curb their use.

The study also revealed a strong demand for guidance, particularly from government. A sizable 90 percent of respondents expressed a desire for government involvement, with 60 percent favoring mandatory regulations and 30 percent preferring government standards that businesses could adopt voluntarily.
Despite their confidence in their current security setups, the study identified gaps in fundamental security practices.

While 82% were confident in their security stack's ability to protect against generative AI threats, fewer than half had invested in technology to monitor its use. Strikingly, just 46% had established policies governing acceptable use, and only 42% provided training on the safe use of these tools.

The findings reflect the rapid adoption of technologies such as ChatGPT, which have become an integral part of modern business. Business leaders are urged to understand how their employees use generative AI in order to identify potential security gaps.