Staff Software Engineer - AI/ML (Business Systems Intelligence)
OpenAsset - Axomic Ltd.
We are looking for a London-based Staff Software Engineer - AI/ML (Business Systems Intelligence) to join our talented, dynamic, and rapidly growing global team. We have an in-office policy of 2 days per week for local employees.
Company Description
OpenAsset is the leading marketing platform for the Architecture, Construction, and Engineering (AEC) industries, trusted by 1,000+ clients over 20 years. Our mission is to be the most innovative partner to AEC firms, delivering solutions that help them win more projects. We recently announced a new AI proposal product, Shred.ai, to continue this mission.
We're a diverse, collaborative, and fast-growing team of 100+ employees with offices in New York and London and a global client base. Backed by Marlin Equity Partners, we're passionate about creating an inclusive workplace where everyone feels valued and has a voice, and we actively hire from a diverse pool of candidates.
About the Role
We're seeking a Staff Software Engineer - AI/ML (Business Systems Intelligence) to take ownership of our internal AI infrastructure and lead our newly formed AI Engineering Pod. This is a hands-on backend role where you'll design, implement, and scale AI-powered systems that connect enterprise SaaS data (Salesforce, Gong, Mixpanel, and more) with ChatGPT-5 and future AI models.
You'll be the point person and technical lead for this team, setting the standard for engineering excellence, building RAG (Retrieval Augmented Generation) pipelines, and enabling AI capabilities across the company. Beyond building systems, you'll drive strategy, internal enablement, and mentor engineers as we expand our use of AI for business intelligence and automation.
This is a foundational role with high visibility and impact: you will own the internal AI platform that makes enterprise data accessible, actionable, and intelligent.
Responsibilities
Project Ownership & Leadership
- Own the design, architecture, and delivery of internal AI/ML systems for business intelligence.
- Lead the planning, estimation, and execution of backend and AI/ML projects, ensuring scalable, secure, and reliable systems.
- Act as the point person for the AI Engineering Pod, representing the team across Engineering, Product, and Leadership.
- Build and maintain APIs, microservices, and data pipelines that enable LLM-powered tools to access and act on SaaS data.
- Implement retrieval-augmented generation (RAG) pipelines with vector databases for context-aware AI.
- Ensure strong backend foundations: scalability, observability, and maintainability.
- Design and deploy AI-enabled internal products that expand company capabilities (e.g. intelligent assistants, automated insights, cross-SaaS workflows).
- Own the internal AI enablement platform, providing reusable tools, frameworks, and APIs for other engineers and business teams.
- Partner with stakeholders to identify high-value internal use cases and deliver tailored AI solutions.
- Establish best practices for AI/ML development, deployment, and monitoring within internal systems.
- Mentor AI Application Engineers and other team members, guiding them on backend, API, and AI/ML integration patterns.
- Champion a culture of reliability, documentation, and continuous improvement.
- Own all internal AI/ML systems built by this team, with accountability for uptime, performance, and compliance.
- Ensure all systems adhere to security, privacy, and compliance standards.
Requirements
- 7+ years of professional experience as a backend or full-stack engineer, with deep expertise in backend systems.
- Deep understanding of LLM technologies, prompt engineering, and fine-tuning approaches.
- Experience with AI/ML pipelines, especially retrieval-augmented generation (RAG) and vector databases (e.g., Pinecone, FAISS, Weaviate).
- Proficiency in Python and at least one other backend language (Java, TypeScript, etc.).
- Expertise in cloud platforms (AWS preferred), microservices, and infrastructure as code (Terraform, Docker).
- Strong track record of owning and delivering large-scale backend platforms or internal systems.
- Strong knowledge of SaaS integrations and API development (REST, GraphQL).
- Excellent communication skills; able to translate technical work into business value and collaborate across teams.
- Prior experience as a tech lead or staff engineer leading a small pod or platform team.
- Exposure to multi-modal AI systems (text, images, transcripts).
- Experience with event-driven architectures and message queues (Kafka, SQS, etc.).
- Knowledge of AI safety and governance practices.
- Familiarity with MLOps frameworks (SageMaker, MLflow, Vertex AI).
Tech Stack
- Languages: Python, TypeScript, Java
- AI/LLM: OpenAI (ChatGPT-5), Claude, Retrieval-Augmented Generation (RAG)
- Infrastructure: AWS (Bedrock, Lambda, S3, ECS), Terraform, Docker
- Databases: MySQL, PostgreSQL, Redis, Vector DBs (Pinecone, FAISS, Weaviate)
- DevOps: GitHub, GitHub Actions, CI/CD pipelines
Benefits
- Competitive salary
- 25 paid vacation days
- 8 bank holidays
- 5 paid sick days
- Statutory Sick Pay (SSP)
- Work from home flexibility
- Paid parental leave
- Working Abroad Policy: 5 days per year
- Pension program
- Bike storage/shower facilities in building
- Career growth and development opportunities
This position is not eligible for visa sponsorship.
Axomic is an Equal Opportunity Employer. We base our employment decisions entirely on business needs, job requirements, and qualifications; we do not discriminate based on race, gender, religion, health, parental status, personal beliefs, veteran status, age, or any other status. We have zero tolerance for any kind of discrimination, and we are looking for candidates who share those values. Applications from women and members of underrepresented minority groups are welcomed.