
Data Platform Engineer

Hybrid @Depop

Job Description

TO APPLY:

PLEASE SEND YOUR CV TO [email protected] WITH THE SUBJECT LINE ‘BCC Application – Data Platform Engineer’

The Role:

We are looking for an experienced Data Platform Engineer to join our Data Infrastructure team. You will play a key role in building and enhancing our data platform to support analytics, machine learning, marketing, and other critical functions.

We’re building scalable and robust systems to harvest, process and analyse the vast amounts of data within our tech ecosystem. With increasing demand to serve other areas of the business, and ultimately our users, you’ll be at the forefront of pioneering Data-as-a-Service.

Want to find out more about Depop & our Engineering team? Visit our blog where we write about technology, people and smart engineering right here – https://engineering.depop.com/

Responsibilities:
You will collaborate with teams across Depop, including Insights, Analytics Engineers, Data Scientists, MLOps, MarTech, and Data Engineering, to address our growing information needs and complex business challenges. Your role will involve building and promoting self-service tools and data best practices, as well as managing our data transformation and overseeing our Datalake.

  • Own initiatives for our Data Platform – working closely with our data scientists, analysts, analytics engineers and other engineers to support their deployment speed and productivity needs with self-serve data transformation and processing tools (see the illustrative sketch after this list).
  • Deliver your team’s projects end-to-end: from scoping and translating business requirements into plans, through design, implementation and maintenance, whilst coordinating with other teams (technical and non-technical users).
  • Proactively identify ways to improve data processes, discovery and ownership, navigating complex challenges as our data grows and becomes an integral piece of our business and product operations.
  • Embrace agile methodologies.
  • Engage in a culture of continuous improvement by attending events such as blameless post-mortems and architecture reviews.
  • Drive health and performance improvements of our data platform and promote company-wide best practices that allow it to scale – striving for automation, writing clear documentation and tutorials, and hosting training sessions.
  • Hold high standards for operational excellence; from running your own services to testing, monitoring, maintenance and reacting to production issues.
  • Contribute to a strong engineering culture oriented towards technical innovation and professional development.
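
For illustration only, not part of the role description: a minimal sketch of the kind of self-serve transformation workflow described above, using Airflow and dbt (both named under the nice-to-haves below). The DAG name, schedule and project path are hypothetical placeholders, not Depop specifics.

```python
# Hypothetical example: a small Airflow DAG that a self-serve transformation
# platform might generate for an analytics engineer. All identifiers below
# (DAG id, schedule, dbt project path) are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_dbt_transform",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the dbt models, then their tests, as two dependent tasks.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/example_project",
    )
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/example_project",
    )
    run_models >> test_models
```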

 

Requirements:

  • Experience with a high-level programming language (e.g. Python, Scala).
  • Proficient with software engineering best practices, such as testing, clean coding standards, code reviews, pair programming and an automation-first mindset.
  • Strong data domain background – you have worked closely with and enabled advanced data users (data scientists, analysts or analytics engineers), and have a good grasp of their needs and how they operate.
  • Passionate about working on a self-service data platform and playing an integral role in designing and creating tools to increase user productivity and velocity across our data organisation.
  • Comfortable working in a fast-paced environment and able to respond to change or uncertainty with a positive attitude and a willingness to learn.
  • A strong sense of ownership, autonomy and a highly organised nature.
  • You have a passion for learning new things and keeping on top of the latest developments and technologies in our field.

Nice to haves:

  • Experience working on Datalake ingestion platforms.
  • Experience managing and integrating tools such as Airflow, Databricks, dbt or similar.
  • Knowledge of systems design within a modern cloud-based environment (AWS, GCP).
  • Good understanding of at least one of the following Data Lake table/file formats: Delta Lake, Parquet, Iceberg, Hudi.
  • Previous experience working with Spark (a minimal sketch follows this list).
  • Experience working with containerisation technologies – Docker, Kubernetes.
  • DevOps experience: building CI/CD pipelines (Jenkins), IaC (Terraform), observability, and authoring technical documentation.
  • Experience with event-driven architectures, preferably using RabbitMQ or Kafka.
  • An understanding of modern data warehouses (such as BigQuery or Snowflake), data modelling best practices, and data pipeline optimisation – query performance and monitoring.
  • Shell scripting and related tooling.
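
Purely as an illustration of the Spark and Parquet items above, a minimal PySpark sketch; the bucket paths and column names are hypothetical examples, not Depop’s.

```python
# Hypothetical example: read raw events from a data lake, aggregate them,
# and write the result back as partitioned Parquet. Paths and columns are
# illustrative placeholders only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-daily-counts").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .count()
)

daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"  # hypothetical path
)
```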
