Senior Data Engineer

Posted 45 minutes 55 seconds ago by Fulcrum Digital

Permanent
Full Time
I.T. & Communications Jobs
Dublin, Ireland
Job Description
Who Are We

Fulcrum Digital is an agile, next-generation digital acceleration company providing digital transformation and technology services from ideation to implementation. These services apply across a variety of industries, including banking & financial services, insurance, retail, higher education, food, healthcare, and manufacturing.

Role Overview

We are looking for an experienced and highly skilled Senior Data Engineer to design, develop, and optimize scalable data platforms and pipelines. The ideal candidate will have strong expertise in PySpark, AWS, and Databricks, with a proven track record of building enterprise-grade data solutions that support analytics, reporting, and machine learning initiatives.

This role requires close collaboration with data architects, analysts, data scientists, and business stakeholders to deliver high-quality, reliable, and secure data products.

Key Responsibilities
  • Design, build, and maintain scalable batch and real-time data pipelines.
  • Develop robust ETL/ELT workflows using PySpark and Databricks.
  • Implement and manage cloud-native data solutions on AWS.
  • Optimize data processing performance, scalability, and cost efficiency.
  • Build and maintain data lakes, data warehouses, and lakehouse architectures.
  • Ensure data quality, governance, security, and compliance standards are met.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
  • Automate deployment and monitoring using CI/CD and infrastructure-as-code practices.
  • Troubleshoot and resolve production data issues and pipeline failures.
  • Mentor junior engineers and contribute to engineering best practices.
Required Skills & Qualifications
  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or related field.
  • 6+ years of experience in Data Engineering or Big Data development.
  • Strong hands-on experience with:
    • PySpark
    • AWS services (S3, Glue, EMR, Lambda, Redshift, IAM, CloudWatch, etc.)
    • Databricks
  • Strong SQL and data modeling skills.
  • Experience with distributed data processing and large-scale datasets.
  • Expertise in building ETL/ELT pipelines and orchestration frameworks.
  • Experience with workflow orchestration tools such as Airflow.
  • Strong understanding of data lake/lakehouse architecture.
  • Experience with Git, CI/CD pipelines, and DevOps practices.
  • Knowledge of performance tuning and optimization techniques in Spark.
  • Excellent problem-solving and communication skills.
Preferred Qualifications
  • Experience with Delta Lake and Databricks optimization techniques.
  • Knowledge of streaming technologies such as Kafka or Spark Streaming.
  • Exposure to Terraform or CloudFormation.
  • Experience working in Agile/Scrum environments.
  • AWS or Databricks certifications are a plus.
  • Strong analytical and troubleshooting skills.
  • Ownership and accountability.
  • Collaboration and stakeholder management.
  • Ability to work in fast-paced environments.