
Associate Data Platform Engineer (Junior)

Hybrid · @Depop · Posted 6 days ago

Job Description

TO APPLY:

PLEASE SEND YOUR CV TO [email protected] WITH THE SUBJECT LINE ‘BCC Application – Associate Data Platform Engineer’

We are seeking an Associate Data Platform Engineer to join our Data Infrastructure team and help us build our data platform for analytics, machine learning, marketing and much more.


We’re building scalable and robust systems to harvest, process and analyse the vast amounts of data within our tech ecosystem. With increasing demand to serve other areas of the business, and ultimately our users, you’ll be at the forefront of pioneering Data-as-a-Service.


Want to find out more about Depop & our engineering team? We write about technology, people and smart engineering right here – https://engineering.depop.com/


Responsibilities
As an Associate Data Platform Engineer within this team, you can expect to:

  • Play an integral role in owning initiatives for our Data Platform – working closely with our data scientists, analysts, analytics engineers and other engineers to support their deployment speed and productivity needs with self-serve data transformation and processing tools (dbt, Databricks, Airflow).
  • Deliver your team’s projects end to end: from scoping and translating business requirements into plans, to design, implementation and maintenance, whilst coordinating with other teams (technical and non-technical users).
  • Proactively identify ways to improve data processes, discovery and ownership, navigating complex challenges as our data grows and becomes an integral piece of our business and product operations.
  • Embrace agile methodologies.
  • Engage in a culture of continuous improvement by attending events such as blameless post-mortems and architecture reviews.
  • Engage in health and performance improvements of our data platform, and promote company-wide best practices that allow it to scale by striving for automation, writing clear documentation and tutorials, and hosting training sessions.
  • Hold high standards for operational excellence: from running your own services to testing, monitoring, maintenance and reacting to production issues.
  • Add to a strong engineering culture oriented towards technical innovation and professional development.


Requirements

  • A strong sense of ownership, autonomy and a highly organised nature.
  • Excellent written and spoken English communication skills.
  • Comfortable working in a fast-paced environment and able to respond to change or uncertainty with a positive attitude and a willingness to learn.
  • Familiarity with a high-level programming language (e.g. Python, Scala).
  • Some experience using a version control system such as Git.
  • Passionate about working on a self-service data platform and playing an integral role in designing and creating tools to increase user productivity and velocity across our data organisation.
  • A passion for learning new things and keeping on top of the latest developments and technologies in our field. We take pride in our learning and make sure to have dedicated time set aside for our growth and development (we offer personal development time and other platforms to share knowledge with your peers!).


What You’ll Learn

  • Workflow management and data processing tools such as Airflow, Databricks, dbt or similar.
  • Data lake ingestion platforms, focusing on optimising and monitoring ingestion flows, compute, storage, governance, privacy and more.
  • Data domain: working closely with and enabling advanced data users (data scientists, analysts and analytics engineers), gaining a good grasp of their needs and how they operate.
  • Python/Scala.
  • DevOps methodologies – building CI/CD pipelines (Jenkins), IaC (Terraform), observability, technical documentation authoring.
  • The inner workings and tradeoffs of Data Lake table/file formats such as Delta Lake and Parquet.
  • Distributed data processing technologies such as Spark.
  • Software engineering best practices, such as testing, clean coding standards, code reviews, pair programming and an automation-first mindset.
  • Containerisation technologies such as Docker and Kubernetes.
  • Systems design within a modern cloud-based environment (AWS).
  • Data pipeline optimisation: query performance and monitoring.
  • Shell scripting and related tooling.
