- News & Resources: Listings >
- Blog
- Microsegmentation: Protecting Data from Cyber Threats
- Retail shoplifting and loss prevention: How to protect your business
- Generative AI Cost Optimization Strategies
- Why Do I Need to Protect My Cloud?
- 10 Reasons for Engaging Outside Experts to Manage Your Cybersecurity
- Why Hiring a 3rd Party MSP Expert Makes Sense and – and Cents (MANY cents!)
- Brand and Network Considerations When Adopting AI Corporately
- Integrating XDR, SIEM, and SOAR
- 3-2-1 –Go? Not so quick, this time.
- 5 Things a CISO Shoud Know
- 10-Step Patch Management Checklist
- Penetration Testing vs. Breach Attack Simulation
- Current big cyber breaches and impact on businesses
- Smart Infrastructure Gets Lit Up!
- Securing Industrial IoT: The Missing Puzzle Piece
- 7 Common Cybersecurity Mistakes Made by SMBs
- The Future of Physical Security: Cloud-Based Systems
- Autonomous and Sensor Technology Use Surging
- 2024 Facilities Trends Will Require Facilities and IT Teams to Work in Tandem
- NGFW vs. WAF. What’s the Right Firewall for You?
- Chris Hadfield’s Words To Live By
- Industrial Revolution 4.0 + IIoT
- Digital Fluency Drives Innovation
- Your Cloud Needs Protecting, Too
- Your building alarm systems could become obsolete. In 2024!
- Zero Trust 2.0: Zero Trust Data Resilience (ZTDR)
- We just got, or got used to, Wi-Fi 6. What is Wi-Fi 7?
- What Does the Board Need to Know? Business Metrics that CISOs Should Share – 4th and Last in a Four-Part Series
- Why 2024 is the Year for AI Networking
- International Women’s Day is Tomorrow – Great Time to Think About…
- Data-Centric Security Step One: Classifying Your Data
- The Network – Unsung Hero of Super Bowl LVIII
- What Does the Board Need to Know? Business Metrics that CISOs Should Share – Third in a Four-Part Series
- Boosting IT Team Performance by Fostering Intuition, Curiosity and Creativity
- Breach Remediation Costs Can Wipeout Bottom Line and Business
- Hoodied Hackers Now Favour Hugo Boss
- What Do You Need to Tell the Board? Business Metrics that CISOs Should Share – Second in a Four-Part Series
- How to Get People to Re-Engage After the Holidays
- What Does the Board Need to Know? Business Metrics that CISOs Should Share – First in a Four-Part Series
- Android Devices MUST be Updated + IT Departments Being Cut as Privilege Escalation Escalates
- Today’s Common Cloud Migration and Management Concerns
- Protect Your Healthcare Network from Cyberattack – Lives are at Stake
- Happy Halloween: Black Cats Lead to Boo….Hoo.
- Insurance Underwriters are Protecting Their Flanks
- Insurance Companies Cracking Down as Cybercriminals Become Better Business Builders
- Scary Cyberattacks Stats
- Parents, Profs and IT Professionals Perceive Back-to-School Through Different Lens
- Zscaler’s new IDTR and other tools that leverage generative AI
- Vanquish Vaping, Vandalism and Villainy
- Fabric for Fast-Paced Environments
- Changes to Cyber Insurance Requirements – What you Need to Know
- Cybersecurity Readiness – Newly Released Report
- Passwords Leaked…Again
- 10-Step Patch Management Checklist
- Remote – Again – For Now… and Still Maintaining Engagement
- Protecting Pocketbooks, Passwords and Property from Pilfering
- Raspberry Robin: Highly Evasive Worm Spreads over External Disks
- Cisco Introduces Responsible AI – Enhancing Technology, Transparency and Customer Trust
- Managing Customer Trust in Uncertain Supply Chain Conditions
- Hope on the Horizon
- Toys of Tomorrow… What will spark your imagination? Fuel your imagination?
- Protecting Purses and Digital Wallets
- The Password that Felled the Kingdom + MFA vs 2FA
- The MOE’s RA 3.0 and Zscaler
- 7 Critical Reasons for MS Office 365 Backup
- Penetration Testing Important, but…
- Social Engineering and Poor Patching Responsible for Over 90% of Cybersecurity Problems
- Breach Incidence and Costs On the Rise Again + 5 Ways to Reduce Your Risk
- Cybersecurity Insurance Policies Require Security Audits and Pen Testing
- Wireless strategies for business continuity gain importance as enterprise expand IoT, cloud, and other technologies
- How Cybercrooks are Targeting YOU
- Enabling Digital Transformation with Cisco SD-WAN
- WFH Post Pandemic – What It Will Look Like. What You’ll Need.
- Leaders to looking to the IoT to improve efficiency and resiliency
- Cyber Security Vernacular – Well, some of it, for now
- Why You Need Disaster Recovery, NOT Just Back-Ups
- 10 Reasons Why Having an Expert Manage Your Cybersecurity Makes Sense and Saves Dollars
- Converting CapEx IT Investments into Manageable OpEx
- The Hybrid Workplace – Planning the Next Phase
- Cisco Cloud Calling: Empowering Customers to Thrive with Hybrid Work
- When You Can’t Access the Cloud
- How to Keep On Keeping On
- New Cisco Research Reveals Collaboration, Cloud and Security are IT’s Top Challenges
- Threats from Within on the Rise
- Cloud Covered? If Not, Take Cover!
- Zero Trust and Forrester Wave Report
- Password Based Cyber Attack: Like Leaving Keys Under Doormats
- So, What’s Up With Sensors?
- Sensors and Systems Create a Digital “Last Mile” and Help Skyrocketing Costs
- Scanners Provide Peace of Mind for Returning Students and Workers
- Sensors Improve Operations and Bottom Line… Easily and Cost-Affordably.
- Cisco Meraki Looks at 2021
- 2020 Holiday Shopping: Cybersecurity and Other Tips to Safeguard Wallets and Systems
- How to make the most of the technology you have
- Personnel, Planet and Business Progress: More Interdependent Than Ever Before
- Sure… you can get them all in the boat – but can you get them to work well together?
- Pushing the Zero Trust Envelope – Cisco is Named a Leader in the 2020 Forrester Zero Trust Wave
- Cloud Data Must be Protected, Too!
- Don’t Let Anyone Get the Dirt on You – Make It Instead!
- How IoT Devices Can Help You and Your business
- WebEx – A World of Possibility
- Creating Your Breach Response Plan Now Will Save You Thousands Down The Road
- Been hacked? Here’s what you must do next.
- The Need for Pen Testing is At an All-Time High
- 5 Ways an IT Reseller Improves Your Performance and Peace-of-Mind
- 5G and Wi-Fi 6: Faster, more flexible, and future ready. Are you?
- Network and Data Security for Returning and Remote Workers + Disaster Recovery Symposium
- Collaboration and Cisco WebEx: Protecting Your Data
- Thursday’s Virtual Conference Tackles Today’s Supply Chain Trials and Tribulations
- 10 Tips to Reduce Cloud Storage Risk
- COVID-19 Crisis Fuelling IT Spending
- Supply Chain/Logistics Experts Share Their Expertise
- Cisco Breach Defence Overview
- Announcing Our New Website and Blog
It’s hard to imagine that the use of Copilot, ChatGPT and other AI tools has become commonplace in only a year, yet it has.
As an employer, you can rejoice in the shortened time requirements for writing code, correspondence and plan outlines, as well as the ease of automating routine tasks, but there are also privacy and security concerns to consider.
The story about the company’s whose senior executives’ employment contracts, complete with salaries and benefits, became available to the rest of the company through an internal ChatGPT query went viral – and likely gave many Boards of Directors heart palpitations, wondering what propriety data could end up in the wrong hands. Given the degree of truly sensitive data that has been made public in the past year, they were right to worry. So should we all.
In addition to revisiting, and re-writing protocols to protect internal privacy, there are ethical matters to address.
For instance, what if employees are using an AI tool /chatbot to create work for which a client is being billed? If the client thinks the work was produced by internal experts, yet that’s not the case – and this is not disclosed – irreparable harm could befall the relationship.
What if employees are using it to do their work, so they can have a second, or third, remote job on the side – and the employer doesn’t know? Again, erosion of trust may be the least of the concerns.
What Should You Do to Protect Corporate Reputation and Morale?
-
Determine if and/or how you want employees to use AI.
-
Detail what uses of an AI tool are acceptable – and all instance in which that is not the case.
-
If you decide to allow specific types of employees to use AI for specific tasks, they should be required to clearly identify the portions of the work that was machine generated vs. what can be considered proprietary, or even corporate intellectual property.
-
Ensure that AI-generated work be peer-reviewed, as these tools are not 100% accurate, and often superficial work – and work that may not reflect the most recent trends and/or technical advances.
-
As mentioned, re-write your access protocols (think “least privilege) to prohibit/prevent the uploading of private and/or proprietary data.
-
Specify the consequences for violating corporate policies and prescribed practices related to AI use.
-
Assess your network security posture with an AI lens – and make necessary changes.
-
Continue to review and update your AI implementation approach and policies.
Other AI Adoption Concerns, Including Security
Anand Oswal is the Senior Vice President and General Manager of Network Security at cyber security leader Palo Alto Networks. On August 2, 2024, he wrote a blog post entitled, “Promise and Perils of Building AI Into Your Business Applications”, that also appeared in Forbes:
“Artificial intelligence (AI) is at the forefront of business innovation. But although AI feels like a relatively new concept, 83% of technology service providers already use generative AI in their businesses.
Business use of AI apps spans nearly every type of application, including supply chain optimization, process automation, customer service chatbots, virtual assistants, data analysis, logistics monitoring, fraud detection, competitive intelligence and more. But there are risks involved with this new technology. Take, for example:
-
Airlines, hotels and online travel businesses are building LLM-powered virtual assistants to let you self-manage your bookings. But what if the organization rushed that application to market without considering supply chain vulnerabilities in the app ecosystem – including corrupt AI and machine learning (ML) packages and model vulnerabilities?
-
Pharmaceutical enterprises are trying to use their past research, trials and outcomes to train models, thereby accelerating their ability to take their next drug to the market. But what if the organization leverages an open-source model that was trained on poisoned data, leading to incorrect or misleading trial results?
-
Real estate companies are building online apps to help you find your next property and build the most appropriate offer based on the market data. But what if the application was subject to prompt injection attacks that let bad actors arbitrage the market at the expense of everyday home buyers?
No matter where you may sit on the AI adoption spectrum, it’s clear that the businesses that are embracing AI are winning a competitive edge. But it’s not as easy as plugging an AI model into your existing infrastructure stack and calling it a win. You’re adding a whole new AI stack, including the model, supply chain, plug-ins and agents – and then giving it access to sensitive internal data for both training and inference. This brings a whole new set of complexities to the security game.
So, how does a business harness the potential of AI without compromising security?
-
The journey to securing AI-powered applications starts with discovery. You must be able to see every component of your AI app ecosystem – including AI apps, models, inference and training datasets, and plug-ins.
-
Next, you must understand your security posture to identify and remediate against possible risks in the supply chain and the configuration, as well as data exposure risks to your AI apps. By identifying your highest-risk applications, you can investigate your training dataset risks and potential level of risk to your organization.
-
Then, you must protect against runtime risks. These are the risks your app is exposed to once it’s deployed and exposed to the outside world. Attackers are aware of the speed at which new AI applications are being developed and rushed to market, and they’ve devised an increasing arsenal of AI-specific attacks in the hopes of exploiting new, untested components and weaknesses in the overall security posture of these applications. Enveloping your AI application components with runtime protection mechanisms helps you shield your model against misuse—like prompt injection techniques to leak your customer data or attackers using your models to generate malware.
The promises of AI can’t be overstated. But the risks must be acknowledged with the same fervor to see it live up to its full potential. A comprehensive security solution will help you confidently build AI-powered apps by securing your journey to AI, from design to build to run.”
To learn how you can best protect your cloud, edge, network, endpoints and people in our brave new AI world, please contact us at [email protected], or 1.877.238.9944.