You will be a member of one of the teams responsible for transforming data into insights that could be published via our products. You enjoy working on a wide range of features on an international, multidisciplinary team. You are passionate about learning new things and care about being part of the design process: developing new features, maintaining existing code, and contributing to the overall architecture design. You are a team player who speaks your mind, and you measure success not by the number of hours spent but by what is accomplished.
- Design, build and run data pipelines to source, ingest, integrate and publish data
- Continuously improve the data pipelines by applying engineering best practices to development, monitoring, and data quality
- Identify new data sources that could be used for new features
- Coach the team on data handling best practices
- Proven relevant experience in a similar position
- Demonstrable ability to work creatively and analytically with a growth mindset
- Experience manipulating data using Python and maintaining pipelines
- Experience with relational databases
- Experience with Docker and Kubernetes orchestration is an advantage
- Understanding of basic data structures and algorithms
- Familiarity with some of the tools we use is a plus (Pandas, Luigi, Airflow, PySpark, PostGIS)
* We are interested in every qualified candidate who is eligible to work in the European Union, but we are not able to sponsor visas.
Join an ambitious and hungry team and enjoy the following benefits:
💰 Competitive salary, because we always want to attract the best talent.
📘 Learning & Development program - We want you to feel happy and confident about growing your skills, your experience, and your personal development.
🏢 Very well-located offices with a great remote work policy and the possibility to work from different places.
🕓 Flexible working hours and work-life balance.