Data Engineer (Scala/PostgreSQL/Kafka/Spark)

From: Chris Travers <chris(dot)travers(at)adjust(dot)com>
To: pgsql-jobs(at)postgresql(dot)org
Subject: Data Engineer (Scala/PostgreSQL/Kafka/Spark)
Date: 2019-08-23 08:56:28
Message-ID: CAN-RpxDX0V1pannQRJ3CLfQ5R4epNcbcv0eJxyeh8omZmQig-A@mail.gmail.com
Lists: pgsql-jobs

As an introductory note on this particular opening: our new architecture
for this product has been the subject of multiple conference talks in the
PostgreSQL community. It uses PostgreSQL as a query engine that reads
Parquet files (via our own foreign data wrapper) generated by Spark jobs,
allowing us to scale the storage and query engines separately. We would
certainly give preference to candidates who have strong PostgreSQL
experience in this regard as well as on the Scala/Spark side.
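To make the architecture above concrete, here is a minimal sketch of the
Spark side of such a pipeline: a job that compacts raw event data into
partitioned Parquet files, which PostgreSQL can then expose as a foreign
table through a Parquet foreign data wrapper. This is an illustration
only, not Adjust's actual code; the paths, column names, and object name
are hypothetical, and running it requires a Spark installation.

```scala
// Sketch: compact many small raw event files into partitioned Parquet
// output. Partitioning by date lets the downstream query engine prune
// partitions it does not need. All paths/columns here are illustrative.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CompactEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("compact-events")
      .getOrCreate()

    // Read raw event files (hypothetical input path).
    val events = spark.read.parquet("/data/raw/events")

    // Drop malformed rows, then rewrite as larger, date-partitioned
    // Parquet files to save space and improve query performance.
    events
      .filter(col("event_time").isNotNull)
      .repartition(col("event_date"))
      .write
      .partitionBy("event_date")
      .mode("overwrite")
      .parquet("/data/compacted/events")

    spark.stop()
  }
}
```

On the PostgreSQL side, the compacted files would then be mapped to a
foreign table via the Parquet foreign data wrapper, so analysts query
them with ordinary SQL while storage scales independently of the
database.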

Data Engineer
Berlin

Adjust is a fast-growing mobile marketing analytics company. We build
business intelligence for mobile apps, placing a high premium on scientific
statistics, hand-in-glove UX, and lean, pragmatic product iteration. We
enable marketers to understand how their marketing campaigns are performing.

We are looking for a *Data Engineer* to join our Development Team in Berlin.

*What we offer you:*

- A competitive salary

- Flexibility in work schedule

- Relocation assistance

- An international team with strong focus on transparency

- Regular team gatherings and company retreats

- An opportunity to do office exchanges in other satellite locations

- Additional perks such as Friday team lunches and free access to our
company gym

More details about our company culture and perks can be found on our careers
page <https://www.unbotify.com/about/>.

*Your role:*

As a Data Engineer you will be responsible for ensuring rapid ingestion of
data into our retargeting platform, AudienceBuilder. You will help us
scale from tens of TBs to multiple PBs of data while keeping the code
maintainable and performant. You will build and maintain very large,
queryable data sets using Spark, Parquet, and PostgreSQL, which includes
ingesting and compacting data to save space and improve query
performance. When a new feature is developed, you will translate business
requirements into technical specifications and implement them. You will
analyze performance bottlenecks and optimize accordingly.

*Your tasks:*

- Review designs and ensure scalability to hundreds of thousands of events
ingested per second

- Implement, test, and document components to ingest data to support new
features

- Provide escalation support for the data ingestion portion of the platform

- Work closely with our operations team to develop maintenance and
operational procedures as well as escalation paths

*Your profile:*

- Experience in development in a distributed environment

- Solid knowledge of Scala and/or Java

- Experience with Spark

- Flink and PostgreSQL experience is a plus

*Interested? Let’s Talk!*

*Application link: https://grnh.se/46638ad02*

--
Best Regards,
Chris Travers
Head of Database

Tel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com
Saarbrücker Straße 37a, 10405 Berlin
