UserLeap enables product teams to know, in real-time, what motivates customers to sign up, engage, and remain loyal to their product. Through a combination of contextual microsurveys, AI-based text analysis, and 75+ research templates, the platform helps any product and research team quickly and easily understand their audience, improve their product, build their roadmap, and solve complex business problems.
UserLeap is based in San Francisco, CA. The company has raised $60M led by Andreessen Horowitz, Accel, and First Round Capital, and has recently been featured in articles by TechCrunch and Business Insider. Customers include Dropbox, Square, OpenDoor, Loom, Shift, and Codecademy.
This is your chance to join a startup in one of its most exciting phases, where you can become an original, founding member of the team and play a vital part in our growth. We're growing our engineering team quickly and looking for an experienced Sr. Data Engineer to help us scale. This position is based in San Francisco, CA or Austin, TX.
ABOUT THE ROLE
In this role, you'll work closely with members across the team to shape and execute our data engineering roadmap.
- Lead critical data engineering projects to ensure our pipelines are reliable, efficient, testable, and maintainable
- Evangelize high-quality software engineering practices for building data infrastructure and pipelines at scale
- Arbitrate critical decisions, weighing data best practices, system realities, and feedback from numerous stakeholders
BENEFITS
- Competitive compensation and equity package
- Free base medical, dental, and vision insurance plus free membership to One Medical
- 401k program
- 21 PTO days per calendar year, plus sick days when you need them
- Annual professional learning and development budget
- Work from home equipment stipend
- Flexible in-office policy when we return
- Quarterly socials and recharge days
- Autonomy to make decisions in a rapidly growing environment
REQUIREMENTS
- 5+ years of relevant experience
- Bachelor's or Master's degree in Computer Science
- Experience designing, building, and operating robust distributed systems
- Experience designing and deploying high-performance systems with reliable monitoring and logging practices
- Past experience building ETL processes
- Experience with data integration tools such as Informatica, Python, and Spark; data stores such as Redshift, AWS Glue, Snowflake, Postgres, DynamoDB, and Cassandra; analytics tools such as Tableau, Mode, and Looker; and scheduling tools such as Airflow