About us: Fancy ETL pipeline that processes product data from huge ecommerce companies. Data extraction and massaging, delivery to destinations like Google/Meta/TikTok/etc. Profitable, 15+ yrs stable, 100% employee-owned. No VC, no pointless meetings, just serious coding.
Stack: Python/Django, JavaScript, VueJS, PostgreSQL, Snowflake, Docker, Git, AWS, AI/LLM integrations (OpenAI & Gemini).
Compensation: $150K–$220K USD/year DOE.
You: Senior dev who's seen (and fixed) enough dumpster-fire code to last a lifetime. Python/Django deeply internalized; ideally strong Vue (or React) skills. Git/Docker/REST are second nature. You’re the coder other devs come to when their stuff breaks: an architect-level thinker who’s rewritten ‘clever’ code into something that actually works. You play well with others and write code that’s easy to live with. Bonus: AI integrations, Py2→Py3 migrations, Snowflake (or Databricks) experience.
Timezones: Primary time zone is PST (standups at 9AM PST). Generally async-friendly but you must reside in the United States.
Benefits: 401K match, healthcare, equity, fully remote. Stable company with no time-wasters.
Apply: email jobs+hn254 [the-at-mark-thing] versafeed [the-period-thing] com
As a Data Engineer at Epic Kids, you will work closely with our development, infrastructure, and data teams to design, build, and optimize data pipelines, ensure data quality and security, and collaborate with other teams to deliver effective data solutions.
Key Responsibilities:
- Develop robust ETL/ELT pipelines to extract, transform, and load data from diverse sources into our data warehouse.
- Enhance and maintain our cloud-based data storage and processing systems for performance, reliability, and cost-efficiency.
- Implement rigorous data quality checks, monitoring, and security measures across all data assets.
- Proactively identify and address data inconsistencies and bottlenecks, continuously refining our data infrastructure for robust, high-performing data solutions.
- Partner with data analysts and non-technical teams to understand data requirements and shape the development of effective data products.
Job Qualifications:
- 5+ years of experience in data engineering, with a strong grasp of data warehousing, ETL/ELT principles, and data modeling.
- Experience with data storage solutions (e.g. relational databases, data lakes), cloud data platforms (e.g. GCP, AWS), and cloud-native data technologies (e.g. BigQuery, Snowflake).
- Experience with workflow orchestration tools (e.g. Airflow).
- Experience with infrastructure tools (e.g. Terraform, Kubernetes, Docker) is a plus.
Salary Range: $150K to $200K. If you are a good SWE with BigQuery + GCP experience, that works too!
Please submit your resume here: https://job-boards.greenhouse.io/epickids/jobs/6669024003
- Node + TS + AWS Developer - SR.
- Database Engineer - SSR/SR.
- Fullstack Engineer: Python + TS + AI - SR.
- Data Engineer w/ Snowflake - SR.
- Node.js Developer - SR.
Apply here: getonbrd.com/companies/improving - or follow us on LinkedIn
Narrative has been building a data collaboration platform designed for simplicity and ease of use since its founding in 2016.
Our primary strength is functioning as a data marketplace, where we differentiate ourselves by:
- automatically standardizing data,
- making platform data accessible through the Narrative Query Language (NQL),
- giving data providers the ability to define row-level access and pricing policies, and
- making it easy to deliver data to a variety of destinations using our "Connector Framework".
We operate two flavours of our platform: An AWS-based implementation that runs on our infrastructure, and a Snowflake-based version running inside the user's Snowflake account.
We are a small, remote-first team looking for great developers who want to jump in and take major systems and user-facing features from design to launch. While the company's headquarters are in NYC, the development team currently includes engineers working from the US (California and New York), Canada (Alberta, British Columbia, and Québec), Poland, and Serbia.
In brief, the technologies we use are:
- Backend: Scala, Spark, Apache Iceberg, Apache Calcite, Cats, Cats-Effect, Http4s, FS2, Doobie, Deequ, Axolotl, BentoML, and HuggingFace Transformers.
- Frontend: Typescript, VueJS, Nuxt, Vite, and Cloudflare Pages.
- Operations: AWS (ECR, ECS, EMR, RDS, S3, etc.), Datadog, Docker, Terraform, with some burgeoning use of EKS/Kubernetes.
Job postings and more information about our team and culture are available at: https://jobs.narrative.io/
Apply by sending your resume to hiring-dev@narrative.io.