
Senior Software Engineer, Data Platform

SentiLink

Software Engineering
United States · Remote
Posted on Mar 3, 2025

Role:

As a Sr. Software Engineer, Data Platform at SentiLink, you will own the data infrastructure components that support the SentiLink suite of products. You will work with product, engineering, and data science teams across the company to build, enhance, and modify the data platform that powers our fraud detection products. You have outstanding programming skills, are proficient in our technology stack, and pick up new technologies quickly as it evolves.

Technologies: Python, Golang, PostgreSQL (RDS), OpenSearch, Redshift, EMR, Spark, Docker, Lambda, and other AWS technologies

This is a remote, US-based role.

Responsibilities:

  • Build, expand, and optimize data infrastructure in order to create the most accurate dataset of identities and their relationships

  • Develop and operate secure, scalable, and reliable data ingestion and ETL/ELT pipelines that meet product requirements

  • Design and maintain a data observability framework to ensure our data meets strict quality and freshness standards

  • Optimize data storage layer and build/maintain interfaces to enable scalable and fast data access to our data stores

  • Collaborate with product teams (squads) supporting their data platform needs for smooth delivery of existing and new products

  • Participate in the on-call rotation for production issues, along with the rest of engineering

  • Mentor junior engineers and contribute to our evolving engineering best practices

  • Drive innovation by actively participating in our hackathons and building proofs of concept

  • Develop functional subject matter expertise within various areas of the identity fraud domain

Requirements:

  • 5+ years of experience in software engineering, data engineering, or a related field

  • Proficiency in Python or Golang and related technologies and frameworks

  • Expertise in building and maintaining ETL/ELT pipelines at scale, leveraging distributed data processing technologies such as Spark, Hadoop, or Kafka

  • Hands-on experience with public cloud platforms such as AWS, Microsoft Azure or GCP

  • Deep understanding of different database technologies, including but not limited to RDBMS (e.g., PostgreSQL), NoSQL (OpenSearch, vector databases), and columnar data stores, along with experience writing efficient queries and applying optimization techniques

  • Experience building enterprise-grade, scalable, containerized data services and frameworks on Kubernetes or similar platforms

  • Working knowledge of Infrastructure-as-Code and DevOps practices

  • Excellent analytical, problem-solving, and interpersonal skills, plus a sense of humor (enjoy the journey)

  • Self-organized, with the ability to work independently and navigate ambiguity

  • Experience working in a Scrum/Agile development environment

  • Bonus points if you have:

    • Experience working with Spark/EMR,

    • Built streaming applications,

    • Experience with AWS technologies such as EKS, SQS/SNS, EMR, Redshift, S3, etc., and/or

    • Prior experience working in a fintech startup.

  • Candidates must be legally authorized to work in the United States and must reside in the United States

Salary Range:

  • $165,000/year - $200,000/year