AI-powered pentesting tools are evolving fast — but most struggle with validation and false-positive control.
I came across a platform that addresses this with proof-of-execution scoring and per-scan isolation.
NeuroSploit v3 is an open-source attempt to make AI pentesting look more like the work of a human team, not a noisy scanner.
The core idea is simple.
Instead of just "guessing" based on an LLM prompt, it spins up isolated Kali Linux containers and uses negative controls and proof-of-execution checks to validate findings before they ever reach the report.
NeuroSploit focuses on three main areas:
1) Coverage and context
→ 100 vulnerability types in 10 categories
→ 3 streams in parallel: recon, junior tester, tool runner
→ Built-in integration with tools you already know (nmap, nuclei, sqlmap, ffuf, etc.)
2) Isolation and control
→ Every scan runs inside its own Kali Linux Docker container
→ Per-scan tool installs, hard CPU/RAM limits, auto cleanup
→ Container pool with TTL and orphan cleanup for stable operations
3) Validation and proof-of-execution
→ Negative controls: send benign “safe” requests to cut false signals
→ 25+ proof methods per vuln type (XSS context, SSRF markers, DB error patterns, etc.)
→ Confidence scoring 0–100 with a final “validation judge” that approves or rejects a finding
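As a toy sketch of that negative-control and judge flow (my own simplification, not NeuroSploit's actual code — the function names, weights, and threshold are invented): fire the real payload and a benign control, and only approve the finding when the marker shows up for the payload but not for the control.

```python
def validate_finding(send, payload: str, control: str, marker: str,
                     threshold: int = 70) -> tuple[int, bool]:
    """Score a candidate finding 0-100 and let a final judge approve/reject.

    `send` is any callable that issues a request and returns the response body.
    The weights and threshold here are illustrative, not the project's values.
    """
    hit_payload = marker in send(payload)   # proof-of-execution check
    hit_control = marker in send(control)   # negative control: benign request

    score = 0
    if hit_payload:
        score += 60                         # the payload produced the marker
    if not hit_control:
        score += 40                         # the benign request did not
    return score, score >= threshold        # judge approves or rejects

# Toy target: an endpoint that reflects script tags but leaves plain text alone.
responses = {"<script>x</script>": "<script>x</script>", "plain": "plain"}
score, approved = validate_finding(
    responses.get, "<script>x</script>", "plain", "<script>"
)
print(score, approved)  # 100 True
```

The point of the control request is the second term: a server that echoes the marker for *any* input fails the negative control and loses 40 points, which is exactly the false-positive class this design is meant to cut.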
On top of that, it can talk to several LLM providers (Claude, GPT, Gemini, local LLMs) and adapt mid-scan when endpoints die, WAF blocks, or returns start to show diminishing value.
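The per-scan isolation described in (2) can be pictured roughly like this — the image name, limit values, and network choice are illustrative placeholders, not the project's actual defaults:

```python
import shlex

def scan_container_cmd(scan_id: str, cpus: float = 2.0, mem: str = "2g") -> list[str]:
    """Build one isolated per-scan `docker run` invocation.

    --rm gives auto cleanup when the scan exits; --cpus and --memory
    are hard resource ceilings enforced by the container runtime.
    """
    return [
        "docker", "run", "--rm",
        "--name", f"scan-{scan_id}",
        "--cpus", str(cpus),          # hard CPU ceiling
        "--memory", mem,              # hard RAM ceiling
        "kalilinux/kali-rolling",     # placeholder base image
    ]

print(shlex.join(scan_container_cmd("a1b2")))
```

One process per container means a crashed or hung tool takes down only its own scan, and `--rm` handles the cleanup the post describes without a separate reaper.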
Is it perfect? No.
Is it closer to how I want AI to work in offensive security? For me, yes.
Would you find it useful if I tried NeuroSploit v3 and shared my honest take on it?
Stay secure😑
___
Enjoy this? 🔄 Repost it to your network and follow @securediary for more.
Join me on LinkedIn.
#Cybersecurity #InfoSec #ThreatIntel
Security teams are entering a new phase.
AI is finding vulnerabilities faster.
Attackers are exploiting faster.
And traditional patch cycles are starting to look slow by comparison.
🔥 This week's Top News:
→ Microsoft patched six actively exploited zero-days (CVE-2026-21510 through -21525)
→ Google fixed Chrome zero-day CVE-2026-2441 under active attack
→ Research showed Claude Opus 4.6 identified 500+ memory corruption vulnerabilities in open-source projects
→ Threat actors are already targeting infrastructure around the Milano Cortina 2026 Winter Games
What matters now isn’t just scanning, but building a robust response architecture.
When a new exploited vulnerability emerges, I always look for three core areas:
1️⃣ Exposure mapping
Do we know which systems are externally reachable or user-triggerable?
Can we prioritize based on potential impact, rather than relying solely on CVSS?
2️⃣ Remediation verification
Can we confirm remediation on the systems that matter most — not just report rollout percentage?
3️⃣ Mitigation
If patching is delayed, are compensating controls in place (isolation, policy tightening, monitoring)?
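Those three checks can be folded into a simple exposure-weighted ranking. A toy sketch — field names and weights are my own, not a standard formula:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float
    exploited: bool        # on a known-exploited list?
    internet_facing: bool  # externally reachable or user-triggerable?
    impact: int            # business impact of the affected asset, 1-5

def priority(f: Finding) -> float:
    """Exposure-weighted priority: exploited + reachable outranks raw CVSS."""
    return (50 * f.exploited) + (30 * f.internet_facing) + (4 * f.impact) + f.cvss

findings = [
    Finding("CVE-A", cvss=9.8, exploited=False, internet_facing=False, impact=1),
    Finding("CVE-B", cvss=7.5, exploited=True,  internet_facing=True,  impact=4),
]
ranked = sorted(findings, key=priority, reverse=True)
print([f.cve for f in ranked])  # ['CVE-B', 'CVE-A']
```

The exact weights matter less than the ordering principle: an actively exploited, internet-facing medium beats an unreachable critical every time.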
The velocity of security has changed.
The question isn’t whether AI will reshape vulnerability management.
It already is.
AI is already a tool for both attackers and defenders. Those who adapt quickly will come out on top.
A question to you:
How are you adjusting your patching or AppSec workflows to account for faster discovery cycles?
Do you use AI?
Look for CVE Alert in the first comment. 👇
Stay secure😑
___
Enjoy this? 🔄 Repost it to your network and follow @securediary for more.
Join me on LinkedIn.
#CyberSecurity #Infosec #ThreatIntel
AI vs Human Cyber Defenders.
AI agents will be tested February 19–20 at the Kyiv International Cyber Resilience Forum, running live cyber defense scenarios alongside experienced security teams.
I am participating in the forum, and I’m genuinely curious how this plays out.
I’ve spent 11+ years working in cybersecurity - both in the military and in business - and the truth is, real incidents almost never play out in a predictable way.
They are messy. Incomplete. Time-constrained.
AI can process data fast.
Humans operate under pressure with context, intuition, and experience.
The interesting question isn’t “who is smarter.”
It’s about whether autonomous agents can operate reliably, in real time, under the same constraints as human teams.
ARIMLABS is running a public vote on the outcome (details in the comments).
Who would you bet on - AI or humans? Why?
@securediary
If your AI can write code… it should help secure it, too.
Anthropic just rolled out Claude Code Security, a new feature designed to scan codebases for flaws and suggest patches.
AI is already great at parsing logs and highlighting anomalies. But stepping into the auditor's shoes to patch code? That requires deep context.
The true test isn't if Claude can find a flaw; it's whether it understands the messy reality of a production environment without hallucinating a "fix" that breaks the build.
Here's how to use Claude Code Security safely:
1️⃣ Extra pair of eyes
→ Run AI scans on every merge and pull request
→ Let it flag risky patterns
2️⃣ Human in control
→ Security engineer or senior Dev reviews each AI fix
→ No auto-merge from AI output
3️⃣ Tie into threat intel
→ Watch CISA Known Exploited Vulns
→ Confirm your codebase isn't using the specific vulnerable functions tied to those CVEs
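That third step can be automated with a small cross-check against a KEV-style catalog. A toy offline sketch — the catalog entries below are made up; in practice you would refresh the snapshot from CISA's published Known Exploited Vulnerabilities feed:

```python
# Made-up KEV-style snapshot mapping CVE -> affected package name.
# A real pipeline would periodically pull CISA's KEV catalog instead.
KEV_SNAPSHOT = {
    "CVE-0000-0001": "examplelib",
    "CVE-0000-0002": "otherlib",
}

def kev_hits(dependencies: set[str]) -> list[str]:
    """Return known-exploited CVEs whose affected package we depend on."""
    return sorted(cve for cve, pkg in KEV_SNAPSHOT.items() if pkg in dependencies)

print(kev_hits({"examplelib", "requests"}))  # ['CVE-0000-0001']
```

Wiring this into CI turns "watch the KEV list" from a habit into a failing build the moment an exploited dependency shows up.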
I extensively use AI for day-to-day work - for example, threat intel summaries, customer email drafts, or compliance audit prep. It’s a fantastic junior analyst. But it is always an assistant, not the one signing off on the decisions.
Do you trust AI to patch your production code or not?🤔
For the #CyberMonday News and CVE alert, see the first comment. 👇
@securediary
Four years of full-scale war. 1,461 days of resilience.
When I served as a SOC Division Chief in the Armed Forces, we prepared for hybrid threats. But the reality of the last four years rewired everything I know about defense.
Living and working in Kyiv, I’ve seen the concept of "Business Continuity" transform from a compliance checkbox into a survival instinct. We don’t just test backups for auditors anymore. We build systems that must survive when the power grid is hit, when the data center runs on diesel, and when the team is coding from shelters.
The biggest lesson for the global cybersecurity community?
Fragility is a choice.
We learned that secure architecture isn't about building unbreachable walls. It's about how fast you can stand back up when the walls shake.
To my fellow Ukrainians: We stand. We build. We defend.
To the global community: Don't wait for a crisis to test if your BCP actually works.
The photo is dated Feb 25th, the second day of the full-scale war. My wife and I were relocating to Truskavets.
Thank you, Creatio and Katherine Kostereva, for making it possible.
Is your resilience tested?🤔
Ours is tested, every day.
AI is coming everywhere, and Cybersecurity is not an exception.
Kyiv International Cyber Resilience Forum was a blast. I have never seen so many cyber people in one place. This is one of the biggest cybersecurity events in Ukraine to date.
The amount and intensity of the networking was unbelievable. After I arrived at the forum at 11:00 a.m., I could not attend a single panel or stage for a whole 2-2.5 hours, purely because of the number of people I knew and wanted to talk to.
The discussions just kept going, and I loved it.
The networking was clearly the main feature of the event: people from Ukraine's government cyber defence, Ukrainian startups, European government representatives, and global startups.
The event was a "Cybersecurity Networking Academy Award" winner.
👇 What were the key topics for me?
1. AI is coming everywhere, and Cybersecurity is not an exception.
Hackers and red teams are using AI to find bugs; defenders and cybersecurity vendors are using AI to defend. If you or your company are not using AI to find bugs or defend against them, you will become outdated and be replaced very soon.
2. Cybersecurity community is growing day by day.
The demand for cybersecurity professionals is at an all-time high, and companies that haven't done cyber before, such as SHERIFF, are now entering the market, because cybersecurity has become an inseparable element of privacy and safety. Wars start with cyber reconnaissance. Power grids, hospitals, schools, and businesses get attacked in cyberspace first, because it is easier to pull off than a head-on conflict in physical space - and it is abused a lot.
3. People are the weakest link in your cybersecurity chain (as they always have been).
Global companies and governments got hacked because someone installed some suspicious Chrome spyware that stole the password to a corporate or gov account. People click on phishing links, not even knowing what they are or that there are emails, links, and attachments that should never be opened. Educate, educate, and then repeat. Regular cybersecurity speaking corners and mini-courses are a must nowadays. It’s not just about your company’s privacy and security; it’s about your personal privacy and security, too.
4. Ukraine is outpacing Europe in cyberspace.
Cybersecurity companies and professionals from Ukraine are growing fast, and government agencies are strong and cyber-resilient. Ukraine is already outpacing Europe in cyberspace and is catching up to the United States very quickly. Professionals from Ukraine are in demand, and companies are ready to pay top dollar for their experience.
Have you been to the event? What stood out to you?😑
@securediary
Would you join a workshop like this?
Security Architecture in Practice: From Attacks to System Defense — How to Think like a Senior/Architect.
Pentagon just labeled one of the world's top AI vendors a "supply chain risk" - so what does that make your enterprise AI strategy?
Secretary of Defense Pete Hegseth just advised the United States Department of War to officially label Anthropic as a supply chain risk.
This is a huge wake-up call for everyone in the industry.
We’re moving past the days when “AI is cool” and heading straight into “AI is a major third-party risk.”
Right now, corporate developers are hardwiring third-party AI models into production environments without a second thought. The "SolarWinds of AI" won't look like a traditional network breach - it will look like a compromised model or coding assistant quietly stealing your ideas and hard work.
Ironically, a couple of days prior, severe RCE and API key theft flaws were patched in Claude Code.
The lines between vendor risk, AI risk, and traditional AppSec have blurred.
Analyze your AI risks diligently, or pay with your company’s reputation.
Are you using AI for your work? 🤔
For the #CyberMonday News and CVE alert, see the first comment. 👇
@securediary
TV Show: Burnt out and happy 🔥
Julia: Vlad, tell me, how does your day go?
Vlad: Nothing special. I wake up at 5 am, work till 12 pm on my first full-time job, then from 12 pm till 8 pm on the second one, and after 8 pm, that's it. I rest.
Julia: Oh, so finally, after 8 pm, you rest?
Vlad: No, I mean after 8 pm I have a quick part-time gig - a couple of tasks done, and $100 in your pocket.
Resonates with you? 🙂
@securediary
A few days ago, a friend from United24 Media asked me a simple question.
"What cybersecurity course should I take as a journalist who could be targeted?"
He is not a technical person.
Just someone who doesn't want to get hacked.
And honestly, this is a much harder question than it sounds.
Most cybersecurity courses are built for people who already understand the basics.
But journalists, researchers, activists, and NGO teams face very real targeted attacks, especially when covering geopolitics.
So they don’t need a 40-hour course on cryptography.
They need to understand things like:
• How spear-phishing actually works
• Why browser security matters more than people think
• What VPNs really do (and what they don’t do)
• The real differences between messaging apps like Signal, Telegram, and WhatsApp
• How accounts actually get taken over
• How attackers exploit trust between journalists and sources
So I shared a few resources I like for non-technical people:
1️⃣ Electronic Frontier Foundation (EFF) – Surveillance Self-Defense
One of the best practical guides for activists and journalists.
2️⃣ Google Security Tips
Simple but surprisingly solid security awareness training.
3️⃣ Freedom of the Press Foundation – Digital Security Training
Very relevant if you work with sources or sensitive information.
Most successful "hacks" don’t involve sophisticated malware.
They involve people being busy, tired, or trusting the wrong email.
If you are a non-tech person - I'm curious:
What cybersecurity topics would be most useful for you?
@securediary
Claude AI Finds 22 Firefox Flaws, AI-Written Multi-Stage Attack.
Just another week in cybersecurity 🤷♂️.
Anthropic reportedly uncovered 22 new Firefox vulnerabilities in partnership with Mozilla. A strong signal that AI is becoming a real force multiplier for defenders.
The flip side is... Pakistan-based threat groups, such as Transparent Tribe, are also using AI to accelerate malware development. Chaining scripts, loaders, and social engineering - all combined into multi-stage attacks that look increasingly efficient and scalable.
AI isn’t just about making people more productive. It now gives an edge to those who use it well.
🔥 Top News to keep an eye on:
→ AI-assisted malware campaigns targeting India
→ Iranian-linked activity using the new Dindoor backdoor against U.S. networks
→ China-linked telecom attacks in South America using TernDoor, PeerTime, and BruteEntry
→ Microsoft disclosed a ClickFix campaign using Windows Terminal
What caught your attention?
What do you want to hear about? 👇
@securediary
Congratulations to UFORCE on this remarkable achievement! It's inspiring to see such innovation in the Ukrainian Defence Tech sector.
Ukrainian people and companies - you inspire all of us 🫰
Original post:
Today, UFORCE steps out of stealth.
We've been building quietly — unifying the best Ukrainian defence tech operators and engineers with Silicon Valley-grade product thinking to create and integrate the world's most battle-proven autonomous weapons platform. #Magura autonomous USVs, #Nemesis & R18 heavy-bomber UAVs, Liut UGV ground combat robots, Predator remote-controlled turret systems, and USS C2 integrated command-and-control — all iterated under fire across over 150,000 real combat missions in Ukraine. Our open platform architecture ties it all together, enabling third-party interceptor integration directly onto our unmanned platforms for sea-launched counter-UAS missions with any compatible effector.
Since Russia's full-scale invasion in 2022, Ukraine became the most demanding proving ground for modern warfare. UFORCE was built to make sure the technologies forged in that environment don't stay on one battlefield — they scale to defend every allied nation that needs them.
The numbers behind what we've built: nine-figure bookings in 2025, nearly 500% growth, a secure supply chain spanning 15+ locations across six allied countries, and a team of over 1,000 operators, engineers, and manufacturers who ship product updates in hours, not quarters.
We're backed by Shield Capital, Lakestar, Ballistic Ventures, and leading American and European defence investors who share our conviction that combat-proven autonomous systems will reshape how free nations defend themselves.
Our leadership:
→ Oleg Rogynskyy, CEO — built People.ai into a $1.1B Silicon Valley AI company. Awarded the Order of Merit of Ukraine by President Zelensky for securing Ukraine's access to commercial satellite intelligence in the early days of the war.
→ Oleksiy Honcharuk, Board Chairman — served as Prime Minister of Ukraine from 2019 to 2020 and has contributed to a number of critical defence projects in Ukraine.
→ Sir Ben Wallace, Board Member — former UK Secretary of State for Defence who reshaped Britain's military posture and helped lead NATO's response to Russia's invasion.
This is day one. Follow UFORCE to see what comes next.