Our Mission

To build AI systems that are safe, aligned, and beneficial. We believe the most important work in AI is ensuring systems remain safe as their capabilities advance.

Founding Story

Astroware was born from a simple observation: as AI systems become more capable, the gap between their abilities and their safety guarantees grows wider.

Our founders came together from Oracle, Lloyds Banking Group, and the decentralized AI ecosystem — each having witnessed firsthand the challenges of deploying AI in high-stakes environments.

We set out to build what we couldn't find: production-ready guard models, rigorous evaluation frameworks, and security layers that actually work in the real world.

Today, Astroware is proud to serve enterprises, AI labs, and governments working on some of the most important AI safety challenges of our time.

Building the future of AI safety

Our Team


Alex Chen

Founder & CEO

Former Oracle engineering lead with 15 years in distributed systems. Now focused on AI safety infrastructure.


Sarah Williams

CTO

Ex-Lloyds Banking Group security architect. Specialized in financial systems security and AI governance.


Marcus Rodriguez

Head of Research

PhD in Machine Learning from Stanford. Published extensively on constitutional AI and alignment techniques.


Elena Volkov

VP Engineering

Led autonomous systems teams at multiple AI startups. Building the next generation of guard models.

Our Values

Safety First

We believe AI safety is not an afterthought — it's the foundation upon which all progress is built.

Rigorous Research

We publish peer-reviewed work and open-source our tools. Claims are backed by benchmarks and data.

Practical Impact

We build systems that work in production. Research that stays in the lab has limited value.

Collaborative Future

AI safety requires collective action. We partner with labs, enterprises, and governments worldwide.

Advisory Board

World-Class Expertise

Dr. James Liu

AI Ethics Advisor

Former Stanford HAI Director

Led AI safety initiatives at Stanford Institute for Human-Centered AI. Published 40+ papers on AI alignment.

Priya Sharma

Security Advisor

Ex-CISO, DeepMind

Former Chief Information Security Officer at DeepMind. 20+ years in enterprise security and adversarial defense.

Dr. Michael Torres

Alignment Research Advisor

Anthropic, Constitutional AI Lead

Research lead on Anthropic's Constitutional AI work. Co-author of foundational research on RLHF and AI alignment.

Rebecca Chen

Strategic Advisor

Partner, a16z crypto

Investor at Andreessen Horowitz focusing on decentralized AI and crypto infrastructure. Previously led investments in BitTensor.

Join Us

We're always looking for talented researchers and engineers passionate about AI safety.

Get in Touch