BidMachine is a high-performance RTB exchange built on a modern technology stack: Kubernetes, Prometheus, Scala, Kafka, Druid, Postgres, Spark, AWS. Every day we process petabytes of data and serve billions of ad requests for tens of millions of mobile application users around the world, running more than 500 servers across three data centers worldwide.
Requirements
- 3+ years of professional work experience;
- Strong expertise in Python 3 and common data frameworks;
- Experience with modern data storage: SQL databases, S3, data lakes and warehouses;
- Experience with data orchestration tools (Dagster, Airflow, Prefect);
- Experience with streaming technologies (Kafka, Spark);
- Basic knowledge of data visualization and BI tools.
Nice to have
- Experience with Databricks platform;
- Experience with Druid (or any other OLAP);
- Basic knowledge of ML and data analytics;
- Basic Scala and functional programming knowledge.
Responsibilities
- Own the in-house data platform and handle its core tasks: collecting, storing, and transforming data;
- Design, develop, test and orchestrate data workflows;
- Bring your experience to the team and share best practices for working with data;
- Take part in developing internal tools for data and business processes automation;
- Work closely with product and engineering teams to provide the best data experience for all data practitioners and consumers.