AWS Data Engineer
Posted 20 days 17 hours ago by Technopride Ltd
£60,000 - £80,000 Annual
Permanent
Full Time
Other
London, United Kingdom
Job Description
Posted on 04/12/2025
We provide end-to-end IT solutions and services, including application services, data and analytics services, AI/ML technologies, and professional services across the UK and EU markets.
This role requires 10+ years of experience.
Role Overview
We are building a next-generation data platform and are looking for an experienced Senior Data Engineer to help design, develop, and optimise large-scale data solutions. This role involves end-to-end data engineering, modern cloud-based development, and close collaboration with cross-functional stakeholders to deliver reliable, scalable, and high-quality data products.
Key Responsibilities
- Design, develop, and maintain scalable, testable, and high-performance data pipelines using Python and Apache Spark.
- Orchestrate data workflows using cloud-native services such as AWS Glue, EMR Serverless, Lambda, and S3.
- Apply modern engineering practices including modular design, version control, CI/CD automation, and comprehensive testing.
- Support the design and implementation of lakehouse architectures leveraging table formats such as Apache Iceberg.
- Collaborate with business stakeholders to translate requirements into robust data engineering solutions.
- Build observability and monitoring into data workflows; implement data quality checks and validations.
- Participate in code reviews, pair programming, and architecture discussions to promote engineering excellence.
- Continuously expand domain knowledge and contribute insights relevant to data operations and analytics.
Required Skills
- Strong ability to write clean, maintainable Python code using best practices such as type hints, linting, and automated testing frameworks (e.g., pytest).
- Deep understanding of core data engineering concepts including ETL/ELT pipeline design, batch processing, schema evolution, and data modeling.
- Hands-on experience with Apache Spark or willingness and capability to learn large-scale distributed data processing.
- Familiarity with AWS data services such as S3, Glue, Lambda, and EMR.
- Ability to work closely with business and technical stakeholders and translate needs into actionable engineering tasks.
- Strong team collaboration skills, especially within Agile environments, emphasizing shared ownership and high transparency.
Desirable
- Experience with Apache Iceberg or similar lakehouse table formats (e.g., Delta Lake, Apache Hudi).
- Practical exposure to CI/CD tools such as GitLab CI, GitHub Actions, or Jenkins.
- Familiarity with data quality frameworks such as Great Expectations or Deequ.
- Interest or background in financial markets, analytical datasets, or related business domains.