Python Data Engineer at ComScore

ComScore (via CC)
remote
6 months ago

An ideal candidate would have:

  • 2+ years developing software using Python
  • 1+ years of experience with Big Data technologies
  • Bachelor’s degree in computer science or related field (nice to have)
  • Linux, shell scripting
  • SQL skills
  • Strong analytical and troubleshooting skills
  • Professional working proficiency in English (both oral and written)

Job Opportunity at Comscore in Poland

Comscore is a global leader in media analytics, delivering insights into consumer behavior, media consumption, and digital engagement, and measuring and analyzing audiences across diverse digital platforms. You will work with cutting-edge technology, play a vital role as a trusted partner delivering accurate data to global businesses, and collaborate with industry leaders such as Facebook, Disney, and Amazon, helping empower businesses in the digital era across the media, advertising, e-commerce, and technology sectors.

We offer:

  • Real big data projects - petabyte scale, thousands of servers, billions of events 🚀
  • An international team (US/PL/IE/IN/CL/NL) - Slack, Zoom, and English are the standard set 🌎
  • Hands-on experience - you have the power to execute your ideas and improve things
  • Quite a lot of autonomy in how you execute things
  • A working environment of small, independent teams 🧑‍💻
  • Flexible work time ⏰
  • Fully remote or in-office work in Wroclaw, Poland 🏢
  • Up to 16,000 PLN net/month on a B2B contract 💰
  • Private healthcare 🏥
  • Multikafeteria 🍽️
  • Free parking 🚗

Recruitment Process:

The recruitment process for the Python Data Engineer position has three steps:

  • Technical screening - 30 min
  • Technical interview - 1h
  • Interview with Manager - ~30 min

Responsibilities:

  • Define data models that tie together large datasets from multiple data sources (terabytes to hundreds of terabytes)
  • Design, implement, and maintain data pipelines and data-driven solutions (using Python) in a Linux/AWS environment
  • Build data pipelines using Apache Airflow, Spark, AWS, EMR, Kubernetes, Kafka, or whatever tool is needed (see the pipeline sketch below)
  • Design and implement solutions using various databases with very specific flows and requirements: Apache Druid, AWS S3, Cassandra, Elasticsearch, ClickHouse, Snowflake, PostgreSQL, and more
  • Optimize - working with big data is very specific: a process may be IO-, CPU-, or network-bound, and we need to figure out faster ways of doing things. At least an empirical feel for estimating computational complexity helps, because even simple operations become costly when multiplied by the size of the dataset (see the cost sketch below)
  • Collaborate with DevOps and other teams to keep solutions and integrations running smoothly
  • Coordinate with external stakeholders to deliver on product requirements
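As a rough illustration of the pipeline work described above, here is a minimal sketch of an Airflow DAG, assuming Airflow 2.4+; the DAG name, task bodies, and the S3 path are hypothetical placeholders, not details from this posting:

```python
# Minimal sketch of a daily extract/transform pipeline (assumes Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: in practice this would resolve a partition in S3/Kafka.
    return "s3://example-bucket/events/2024-01-01/"  # hypothetical path


def transform(ti, **context):
    # Pull the partition path produced by the upstream task via XCom.
    path = ti.xcom_pull(task_ids="extract")
    # Placeholder: a Spark/EMR job would typically be submitted here.
    print(f"transforming partition at {path}")


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform after extract
```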
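And a back-of-envelope cost sketch for the optimization point: even a linear scan is expensive at this scale. All numbers here are illustrative assumptions, not figures from the posting:

```python
# Back-of-envelope estimate: even an O(n) pass is costly at big-data scale.
DATASET_BYTES = 500 * 10**12             # 500 TB, hypothetical dataset size
THROUGHPUT_BYTES_PER_SEC = 200 * 10**6   # ~200 MB/s sequential read per node
NODES = 100                              # hypothetical cluster size

single_node_hours = DATASET_BYTES / THROUGHPUT_BYTES_PER_SEC / 3600
cluster_hours = single_node_hours / NODES

print(f"one full scan, single node: {single_node_hours:,.0f} h")   # ~694 h
print(f"one full scan, {NODES} nodes: {cluster_hours:,.1f} h")     # ~6.9 h
```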

Requirements:

Python, Big data, Linux, SQL

Tools:

Jira, Confluence, Bitbucket, Git, Jenkins, Agile, Kanban.

Additionally:

  • Private healthcare
  • Remote work
  • Flexible working hours
  • International projects
  • Small teams
  • Free parking
  • Bike parking
  • Playroom
  • Free coffee
  • Modern office
  • No dress code
