Senior Engineer, Data (Remote, US) at Renew Home


Who We Are

Renew Home is on a mission to change how we power the world by making it easier for customers to save energy and money at home as part of the largest residential virtual power plant in North America.

We partner with industry-leading brands to better manage residential energy for users by prioritizing efficiency, savings, and comfort, and cleaner energy for everyone.

We are an Equal Opportunity employer striving to create a diverse, equitable, and inclusive work environment where everyone feels that they have a voice that is heard.

We strongly encourage candidates to check out our website at www.renewhome.com to learn more about the world-changing work we are doing.

What You Will Do

- Architect and deploy secure, scalable, and highly available batch and real-time data pipelines.
- Implement and optimize data lake architectures for structured and unstructured data from millions of connected devices.
- Work closely with development teams to integrate data engineering services into the broader system architecture.
- Collaborate with cross-functional teams of engineers, data scientists, and analysts to deliver clean, reliable data.
- Analyze and enhance the performance of PostgreSQL Aurora and Redshift databases through query tuning, indexing and partitioning strategies, and efficient resource allocation.
- Maintain system performance, data integrity, and uptime.
- Manage and participate in on-call rotations and uphold strong operational standards.
- Contribute to the design and evolution of our data architecture to support growing business needs.
- Work with tools and platforms such as Python, Redshift, Postgres, AWS/GCP, AWS Lambda, Kinesis, Prefect (or Airflow), Redis, Git, and Terraform.
- Participate in our agile development process, including regular team updates, stand-up meetings, and one-on-ones.

Requirements

- 5-10+ years of industry experience.
- Bachelor's or Master's degree in computer science, or equivalent experience in the software industry.
- Self-starter who takes initiative to identify improvement areas, rigorously tests potential solutions, and proposes actionable enhancements to drive operational success.
- Proficiency in Python and SQL, plus solid software engineering fundamentals.
- Hands-on experience building scalable batch and real-time data pipelines using structured and unstructured data.
- Experience with orchestration tools such as Prefect, Airflow, or Dagster.
- Experience with streaming technologies like Apache Kafka, AWS Kinesis, Apache Flink, or GCP Pub/Sub.
- Strong knowledge of data lake architectures and technologies (e.g., AWS S3, Iceberg, AWS Glue, Delta Lake, or similar).
- Proven ability to analyze and optimize database performance, including query tuning, partitioning, indexing strategies, and resource allocation, with extensive hands-on experience using Redshift and Postgres.
- Proficiency in using CDK and Terraform for automating infrastructure deployment and management.
- Ability to work collaboratively with development teams, providing guidance and mentorship on data infrastructure issues and best practices.
- Commitment to staying up to date with the latest advancements in cloud infrastructure and database technologies, and to continuously improving processes and systems.

Bonuses:

- Extensive experience in data warehousing best practices and familiarity with advanced Redshift features (e.g., Spectrum, workload management).
- Exposure to machine learning pipelines or big data frameworks like Apache Spark or Hadoop.
- Contributions to open-source data projects or relevant certifications (e.g., AWS Certified Data Analytics, GCP Professional Data Engineer).

Company Location: United States.