🍁 This season, we’re taking a moment to pause and appreciate the teammates, families, customers, and partners who make every day meaningful. Swipe ➡️ to see what a few members of the WitnessAI team are thankful for, both in and outside the office. Wishing you a wonderful Thanksgiving from all of us at WitnessAI! 🦃🧡 #WitnessAI #Thanksgiving #AI #AISecurity
WitnessAI
Computer and Network Security
Mountain View, CA · 4,612 followers
Enable Enterprise AI, Safely.
About us
WitnessAI enables safe and effective adoption of enterprise AI through security and governance guardrails for public and private LLMs. The WitnessAI Secure AI Enablement Platform provides visibility of employee AI use, control of that use via AI-oriented policy, and protection of that use via data and topic security. Learn more at https://witness.ai. #AIGovernance #EnterpriseAI #SecureAI #GenerativeAI #AICompliance #DataPrivacy
- Website
- https://witness.ai
- Industry
- Computer and Network Security
- Company size
- 51-200 employees
- Headquarters
- Mountain View, CA
- Type
- Privately Held
- Founded
- 2023
Locations
- Primary
- Mountain View, CA 94040, US
Updates
-
How can organizations secure their AI agents and models from reasoning-leakage jailbreaks? Amr Ali, ML Researcher at WitnessAI, breaks down essential protection mechanisms that help prevent adversarial threats and ensure safe AI deployment. Witness the full demo of our Model Protection Guardrail here: https://lnkd.in/g2BaYThn #AI #ModelProtection #CyberSecurity #CISO
-
Enterprises are turning on AI for employees, applications, models, and now autonomous agents at incredible speed. The GTG-1002 campaign disclosed by Anthropic shows how quickly these systems are becoming part of core attack infrastructure, and how hard it is to secure them with fragmented tools and policies. This is an ecosystem problem, not a single-vendor problem. In a new blog, our Head of Product Marketing, Sharat Ganesh, looks at what this incident tells us about the future of enterprise AI security and governance, including:
- Why agents should have their own scoped identities instead of inheriting developer access (see the sketch below)
- How to protect the communication fabric (including MCP) that links agents to tools and data
- Why “cognitive observability” is needed to understand agent intent, not only log events
- How a unified control plane can enforce one set of policies across employees, apps, models, and agents
Read the full post here: https://lnkd.in/g4BnRZCn #EnterpriseAI #AIsecurity #GenAI #AutonomousAgents #Cybersecurity
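On the first bullet above: a minimal, hypothetical sketch of what a scoped agent identity can look like in practice. All names here are illustrative assumptions, not WitnessAI's product or the approach described in the linked post; the idea is simply that an agent gets a short-lived, least-privilege credential minted for its task instead of inheriting a developer's token.

```python
# Hypothetical sketch: mint a scoped, short-lived identity per agent run
# instead of handing the agent a developer's broad credentials.
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset   # explicit allow-list of tools/data this agent may touch
    expires_at: float   # short TTL forces regular re-issuance and review

    def allows(self, scope: str) -> bool:
        return time.time() < self.expires_at and scope in self.scopes

def issue_agent_identity(agent_name: str, scopes: list[str], ttl_seconds: int = 900) -> AgentIdentity:
    """Mint a least-privilege identity for one agent run (not a developer token)."""
    return AgentIdentity(
        agent_id=f"{agent_name}-{secrets.token_hex(4)}",
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )

# Usage: the agent can read tickets and search the knowledge base, nothing else.
ident = issue_agent_identity("support-triage-agent", ["tickets:read", "kb:search"])
assert ident.allows("tickets:read")
assert not ident.allows("prod-db:write")  # out of scope by construction
```

The narrow scope list and short TTL mean a compromised or misbehaving agent can only reach what it was explicitly granted, and only briefly.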
-
What is reasoning leakage in AI? WitnessAI's ML researcher, Amr Ali, dives into this crucial topic, discussing how it can expose vulnerabilities in AI models. Learn why understanding this concept is vital for the future of AI security. Witness the full demo of a reasoning leakage attack here: https://lnkd.in/g2BaYThn #Cybersecurity #AI #Innovation
-
Next stop: Half Moon Bay ✈️ We’re heading to the west coast on December 3rd for the GBI Impact Annual CISO Summit! If you’re attending, stop by and chat with our onsite team to see how we enable the safe and effective adoption of enterprise AI. While you're there, you won't want to miss our roundtable session, “Agentic AI in the Enterprise: Visibility, Security, and Governance Challenges,” moderated by our Head of Product Marketing, Sharat Ganesh. Haven't registered yet? There's still time to save your spot and join us: https://lnkd.in/erR2KTQb See you in Half Moon Bay! #WitnessAI #AI #Cybersecurity #EnterpriseAI #AISecurity #AgenticAI
-
💬 We go live in a few hours—Nov 19 at 10:00am PT! Learn how reasoning-leakage attacks unfold—and how Model Protection Guardrails detect and block attacks in real time, securing everything from foundation models to autonomous agents. Including:
- How to think through the agentic threat landscape
- How reasoning transparency creates “self-betrayal” vulnerabilities
- Why unified runtime protection is essential for modern AI systems
- How to integrate Model Protection Guardrails into your AI stack to stop jailbreaks
🎙️ Speakers: Amr Ali (Head of ML) & Sharat Ganesh (Head of Product Marketing), WitnessAI
👉 Register: https://lnkd.in/g2BaYThn
#AIsecurity #LLMSecurity #GenAI #Cybersecurity #CISO #MLOps #AISafety #ModelProtection #Jailbreaks #ThreatResearch
-
WitnessAI reposted this
New research shows attackers can use model reasoning traces to jailbreak AI systems in under five attempts. If you think of alignment as a loss function, then reasoning leakage gives attackers an approximate gradient. In our analysis across multiple models, we saw the same structure:
- The model refuses → explains exactly which constraint it’s enforcing.
- That explanation becomes a directional signal: “move away from this wording, keep that intent.”
- Over a handful of turns, the attacker performs semantic gradient descent on the model's safety boundary.
This is prompt injection 2.0: instead of guessing magical jailbreak phrases, attackers query the model for its own safety logic and then inject around those constraints, using its prior reasoning as context. Worse, agentic systems make this even easier. Once the model’s reasoning is fed back into tools, memory, or other agents, the attack surface stops being “one prompt” and becomes a self-reinforcing feedback loop.
Join me and Sharat Ganesh, Head of Product Marketing at WitnessAI, on Nov 19 to see the demo and learn how to defend reasoning-enabled systems before attackers turn transparency into an exploit surface.
🗓️ Wednesday, Nov 19 | 10:00 AM PT
👉 Register: [Link in Comments]
#AISecurity #ModelProtection #LLMJailbreak #Openai #Anthropic #llms #Llama #Phi #Gemma #Qwen #PromptInjection #CyberSecurity #ResponsibleAI #AIResearch
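To make the defensive side of this concrete, here is a minimal, hypothetical sketch of two guardrail ideas implied by the post: never echo the model's detailed refusal rationale back to the caller, and flag sessions that keep re-submitting small rewordings of a refused prompt. Function and variable names are illustrative assumptions, not WitnessAI's Model Protection Guardrail.

```python
# Hypothetical sketch of runtime checks against reasoning-leakage probing.
import re
from collections import defaultdict
from difflib import SequenceMatcher

# Heuristic: a refusal that explains *which* rule fired is leaking constraint detail.
REFUSAL_RATIONALE = re.compile(
    r"(?i)\bI can't .*because\b|\bthis (request|prompt) violates\b|\bmy policy on\b"
)

def redact_refusal_rationale(model_output: str) -> str:
    """Return a generic refusal instead of echoing the model's safety reasoning."""
    if REFUSAL_RATIONALE.search(model_output):
        return "I can't help with that request."
    return model_output

# Track recent prompts per session to spot iterative rewording after refusals.
_recent_prompts: dict[str, list[str]] = defaultdict(list)

def looks_like_boundary_probing(session_id: str, prompt: str, threshold: float = 0.8) -> bool:
    """Flag prompts that are near-duplicates of earlier prompts in the same session."""
    history = _recent_prompts[session_id]
    probing = any(SequenceMatcher(None, prompt, old).ratio() > threshold for old in history)
    history.append(prompt)
    return probing

# Usage sketch: run both checks at the inference boundary.
if looks_like_boundary_probing("session-123", "same question, slightly reworded"):
    pass  # e.g. step up scrutiny, rate-limit, or route to review
print(redact_refusal_rationale("I can't help with that because it violates my policy on X."))
```

A real deployment would use semantic similarity and classifier-based refusal detection rather than regexes and string ratios, but the control points are the same: the model's output path and the per-session prompt history.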
-
WitnessAI reposted this
One of the unique features of WitnessAI's architecture is that it easily supports data sovereignty for AI activity data, across first- and third-party models. Does your AI security platform do this? Ours does. https://lnkd.in/gCYBmqnM
-
Today, we honor the courage and sacrifice of the brave men and women who have served our nation. Your dedication reminds us what it means to protect, defend, and lead with integrity in everything we do. To all veterans and those currently serving, we thank you for your service. 🇺🇸 #VeteransDay #Courage #HonoringService #WitnessAI #EnterpriseAI #Cybersecurity
-
Grab your spot! https://lnkd.in/gzqRGpJ9
AI & Cybersecurity Product Marketing Leader | Advisor @National Centre of Excellence-Cybersecurity, Government of India | Investor | Securing Enterprise AI (LLMs, Agents, RAG)
When AI starts explaining how it thinks, attackers start taking notes.
After the release of K2-Think, red-team researchers found that “transparent reasoning” features—meant to build trust—can also expose the exact decision paths models use to refuse unsafe requests. Attackers simply probe, read the model’s reasoning, and refine their prompts until they slip past the guardrails.
Every time a model explains why it refused a prompt, it gives attackers another clue. Each rejection becomes a breadcrumb — and within a few turns, those crumbs form a clear path around your defenses.
This kind of reasoning-leakage jailbreak isn’t isolated. The same exploit has surfaced in multiple frontier models, including GPT-5. It’s becoming one of the fastest-spreading AI vulnerabilities in the wild.
Join me and Amr Ali, Head of ML at WitnessAI, as we unpack real examples and show how unified runtime protection closes the loop across chatbots, models, and agents—stopping attacks at every phase.
🗓️ Wednesday, Nov 19 | 10:00 AM PT
👉 Register: [Link in Comments]
#AIsecurity #ReasoningLeakage #ModelProtection #security #cybersecurity #infosec #dfir #ai #openai #anthropic #llama #k2think
Stephanie Gilliam Dan Graves Gil Spencer Rick Caccia Faisal Rahman Alex Waterman