AI Engineer
Posted 1 day 12 hours ago by Advantage Resourcing UK Ltd
Certain Advantage is recruiting on behalf of our Trading client for an AI Engineer on a contract basis, initially 6-12 months, based in London. The role will require some onsite days in Central London during the week.
We are seeking Engineers skilled in Python with a strong focus on GenAI and LLMs to lead the integration of cutting-edge language technologies into real-world applications.
If you're passionate about building scalable, responsible, and high-impact GenAI solutions, this could be for you!
We're looking for Engineers with strong core technical skills in Python programming, data handling with NumPy, Pandas, and SQL, and Git/GitHub for version control.
Experience with any of these GenAI use cases would be relevant and desirable: chatbots, copilots, document summarisation, Q&A, and content generation.
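To give a flavour of the use cases above, a document-summarisation call is only a few lines once an LLM provider's API is in place. The following is a minimal illustrative sketch, assuming the OpenAI Python client and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders, not a prescribed stack.

    # Minimal document-summarisation sketch (illustrative only).
    # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarise(document: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": "Summarise the document in three bullet points."},
                {"role": "user", "content": document},
            ],
        )
        return response.choices[0].message.content

    print(summarise("Quarterly trading report text goes here..."))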
System Integration & Deployment:
- Model Deployment: Flask, FastAPI, MLflow
- Model Serving: Triton Inference Server, Hugging Face Inference Endpoints
- API Integration: OpenAI, Anthropic, Cohere, Mistral APIs
- LLM Frameworks: LangChain, LlamaIndex for building LLM-powered applications
- Vector Databases: FAISS, Weaviate, Pinecone, Qdrant (Nice-to-Have)
- Retrieval-Augmented Generation (RAG): Experience building hybrid systems combining LLMs with enterprise data (see the illustrative sketch after this list)
- MLOps: Model versioning, monitoring, logging
- Bias Detection & Mitigation
- Content Filtering & Moderation
- Explainability & Transparency
- LLM Safety & Guardrails: Hallucination mitigation, prompt validation, safety layers
- Azure Cloud Experience
- Cross-functional Collaboration: Working with software engineers, DevOps, and product teams
- Rapid Prototyping: Building and deploying MVPs
- Understanding of ML & LLM Techniques: To support integration, scaling, and responsible deployment
- Prompt Engineering: Designing and optimising prompts for LLMs across use cases
- Evaluation Metrics: Perplexity, relevance, response quality, user satisfaction
- Monitoring in Production: Drift detection, performance degradation, logging outputs
- Evaluation Pipelines: Automating metric tracking via MLflow or custom dashboards (a minimal MLflow example follows this list)
- A/B Testing: Experience evaluating GenAI features in production environments
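For the Retrieval-Augmented Generation item above, the overall shape of a hybrid system can be sketched briefly: embed enterprise documents, index them in a vector store, retrieve the closest matches for a query, and pass them to the LLM as context. This is a minimal sketch assuming FAISS and the OpenAI Python client; the documents, embedding model, chat model, and helper names are illustrative only.

    # Minimal RAG sketch: FAISS retrieval + LLM answer (illustrative only).
    # Assumes: `pip install faiss-cpu openai numpy` and OPENAI_API_KEY set.
    import numpy as np
    import faiss
    from openai import OpenAI

    client = OpenAI()
    docs = [
        "Desk A trades European power futures.",
        "Desk B handles FX options for corporate clients.",
        "Compliance sign-off is required before deploying new models.",
    ]

    def embed(texts):
        # OpenAI embeddings here; any embedding model could be swapped in.
        out = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in out.data], dtype="float32")

    # Build an in-memory vector index over the documents.
    vectors = embed(docs)
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)

    def answer(question: str, k: int = 2) -> str:
        # Retrieve the k nearest documents and pass them to the LLM as context.
        _, ids = index.search(embed([question]), k)
        context = "\n".join(docs[i] for i in ids[0])
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    print(answer("Who trades power futures?"))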
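And for the evaluation-pipeline item, metric tracking can be automated by logging scores for each evaluation run. The following is a minimal sketch assuming MLflow; the metric name, example data, and score_response() function are placeholders for whatever relevance or quality measures the team adopts.

    # Minimal evaluation-logging sketch with MLflow (illustrative only).
    # Assumes: `pip install mlflow`; score_response() stands in for a real metric.
    import mlflow

    def score_response(question: str, response: str) -> float:
        # Placeholder relevance score; in practice this could be an
        # LLM-as-judge call, embedding similarity, or a human rating.
        return float(len(response) > 0)

    eval_set = [("What is the settlement date?", "T+2 for these contracts.")]

    with mlflow.start_run(run_name="genai-eval"):
        scores = [score_response(q, r) for q, r in eval_set]
        mlflow.log_metric("mean_relevance", sum(scores) / len(scores))
        mlflow.log_param("num_examples", len(eval_set))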
Does this sound like your next career move? Apply today!