We are a highly successful company with great ambitions. We operate in a very competitive market, so every day we look for opportunities to be better. To be faster. Faster still. We never stand aside and are never afraid to try something new. We have plenty of ideas of our own, and we are just as open to fresh ones. Equally important, we have the resources to set these ideas in motion.
We invite those who are fired up to:
— Build a next-generation data platform;
— Work directly with data engineers, helping them improve the data lake;
— Participate in project architecture and design, code review, and CI/CD;
— Test new approaches by creating PoCs and running benchmarks;
— Communicate effectively with business stakeholders;
— Synthesize relevant information on key milestones, success criteria, and risks.
Essential professional experience:
— Strong knowledge of Python 3 (4+ years of experience);
— Software design knowledge: OOD, OOP, design patterns, microservices;
— Experience building distributed ETL systems with Airflow;
— Experience with SQL databases holding more than a billion rows;
— Knowledge of both SQL and NoSQL databases (PostgreSQL preferred), including core database-engine concepts (B-trees, locks, MVCC, write-ahead logging, partitioning, sharding, columnar storage);
— Understanding of queueing mechanisms (Kafka, RabbitMQ);
— High level of personal responsibility and readiness to commit to products rather than tasks;
— Knowledge of Kubernetes;
— Willingness to learn new things and the pursuit of constant improvement;
— Ability to communicate advanced technical concepts to non-technical audiences;
— Team- and process-oriented mindset;
— AWS expertise.