


Author Topic: Enterprises are struggling to address the security concerns of generative AI  (Read 2615 times)

Offline Matthead012

ExtraHop, a cloud-native network detection and response startup, revealed a troubling trend in a recent study: enterprises are grappling with the security consequences of employee generative AI use.
Their new research report, The Generative AI Tipping Point, sheds light on the challenges organizations confront as generative AI technology becomes more prevalent in the workplace.

The report examines how businesses are coping with the use of generative AI tools, revealing significant cognitive dissonance among IT and security leaders. Notably, 73 percent of these leaders acknowledged that their employees regularly use generative AI tools or large language models (LLMs) at work. Despite this, an overwhelming majority admitted to being unsure how to effectively address the associated security concerns.
When asked about their worries, IT and security leaders focused more on quality risks, such as the possibility of inaccurate or nonsensical responses, than on serious security issues such as the exposure of customer and employee personally identifiable information (PII) (36%) or financial loss (25%).

ExtraHop Co-Founder and Chief Scientist Raja Mukherjee stated, "By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come."
One of the study's unexpected findings was the ineffectiveness of generative AI bans. Approximately 32% of respondents said their organizations had forbidden the use of these tools, yet only 5% of employees reported never using them, demonstrating that bans alone are insufficient to curb adoption.

The study also revealed a strong demand for guidance, particularly from government. A sizable 90 percent of respondents expressed a desire for government involvement, with 60 percent favoring mandatory regulations and 30 percent preferring voluntary government standards that enterprises could adopt at their discretion.
Despite respondents' confidence in their current security architecture, the analysis identified shortcomings in fundamental security practices.

While 82% were confident in their security stack's capacity to protect against generative AI risks, fewer than half had invested in monitoring technology. Notably, just 46% had written policies governing acceptable use, and only 42% had training programs for the safe use of these tools.

The findings reflect the rapid adoption of technologies such as ChatGPT, which have become an essential component of modern organizations. Business leaders are urged to understand their employees' use of generative AI in order to uncover potential security gaps.

Altcoins Talks - Cryptocurrency Forum




 
