
When Google unveiled Gemini 3 pro on 18 November 2025, most headlines focused on its leaps in reasoning, multimodality, and performance. But executives should look past the model-to-model comparisons. Gemini 3’s real significance is structural.
Part of a Broader Trend — Accelerated by Gemini 3
AI assistants have already begun moving inside enterprise workflows.
Microsoft Copilot, for example, has embedded OpenAI models deeply into the Microsoft ecosystem for some time, enabling users to query documents from office.com, summarize inboxes, act on Teams content, and automate tasks across Office 365.
Gemini 3 continues — and accelerates — this trend in Google’s environment.
It comes natively with a broad set of Workspace integrations and early agentic capabilities, pushing Google toward a more unified enterprise AI mesh, where AI mediates interactions across email, documents, storage, collaboration tools, and automation.
This evolution means:
AI is no longer something the business uses.
AI is becoming something the business runs on.
And that fundamentally changes the security paradigm.
AI Has Moved Into the Operational Backbone
What makes Gemini 3 transformative is not the headline capabilities, but its ability to operate inside productivity ecosystems with unprecedented context and agency.
The model can now:
- rewrite and route documents
- summarize and act on long email threads
- call APIs
- trigger workflow automations
These capabilities mirror — and in some cases expand on — what Copilot already offers.
But Gemini 3 ships these integrations natively inside Google Workspace, giving the model operational reach that traditional LLM deployments did not have.
As AI becomes part of the execution layer, the attack surface expands — and much of this expansion is invisible to existing security controls.
Indirect Prompt Injection: The Fastest-Growing Enterprise Threat
Lakera, a Check Point company, has shown that indirect prompt injection allows attackers to target the data an AI ingests — not the prompts users type. Gemini 3 multiplies this risk by pulling information from across the enterprise:
- documents
- emails
- links
- PDFs
- shared content
A single poisoned webpage, signature block, or embedded PDF element can silently redirect model behavior. Traditional security tools cannot detect this new class of manipulation.
Multimodality Expands the Attack Surface
Gemini 3’s multimodal capabilities bring enormous productivity upside — and new forms of risk.
Lakera has demonstrated practical multimodal attacks, including audio-based jailbreaks where transcripts appear clean even as the model is manipulated. With Gemini 3, executives must now account for:
- adversarial audio
- malicious images
- embedded or manipulated media
- deceptive screenshots
Each introduces vectors that are not covered by today’s email, endpoint, or content security controls.
Agentic AI Brings Operational Risk Into the Foreground
Gemini 3 also introduces early agentic behaviours — the ability to take actions, not just provide answers.
Its tool-calling, automation, and API-level operations give AI real authority inside enterprise systems.
Microsoft Copilot already performs similar tasks via Skills and connectors, but Gemini 3’s approach is more tightly tied to native Workspace surfaces.
Lakera’s analysisof the Model Context Protocol (MCP) shows how quickly such agent systems become risky when:
- permissions are overly broad
- scopes are unclear
- actions are not monitored
- outputs are not validated
A misconfigured agent can escalate privileges, trigger unintended actions, or interact unpredictably with critical systems.
This is no longer hypothetical.
This is operational risk.
Most Enterprises Are Not Ready
Lakera’s GenAI Security Readiness Report shows organizations adopting AI far faster than they are securing it. Most still lack:
- AI governance
- guardrails
- agent monitoring
- multimodal protections
- adversarial testing pipelines
Gemini 3 widens this gap. The model’s power accelerates value creation — but it also accelerates exposure.
Gemini 3 Pro: Strong Foundations, Not a Security Strategy
Early internal results from Lakera’s b³ security evaluation, which measures how easily models can be manipulated into leaking content or bypassing safeguards, offer an important nuance for executives.
The preview model gemini-3-pro-preview ranks among the strongest systems we’ve tested — slightly ahead of Anthropic’s Claude 4.5 Haiku — particularly in direct content-extraction and instruction-override scenarios.
We see the greatest gains when Gemini is explicitly instructed to prioritize safety and when responses are routed through an additional “self-judge” layer. This hardened configuration delivers noticeably stronger protection on the most challenging tasks.
But it comes with a trade-off: that added robustness requires significantly more internal reasoning, making Gemini 3 Pro far more computationally expensive, while models like Claude 4.5 Haiku deliver strong security at lower cost.
his reinforces a critical lesson:
The model is not the security strategy.
Configuration, prompting, and layered guardrails matter as much as the base model itself.
The New Executive Imperative
Gemini 3’s real transformation is not what the model knows — it’s what the model can access.
AI now touches documents, inboxes, APIs, workflows, and systems across the enterprise environment.
The executive question is no longer: “How intelligent is the model?”
It is now: “What is the model allowed to do — and who ensures it behaves safely when it does it?”
This is the new enterprise perimeter. Securing it is now a board-level responsibility — and it will define the next decade of cyber security strategy.



