Senior Engineer, Data (Remote, US) at Renew Home


Who We Are

Renew Home is on a mission to change how we power the world by making it easier for customers to save energy and money at home as part of the largest residential virtual power plant in North America. We partner with industry-leading brands to better manage residential energy for users, prioritizing efficiency, savings, and comfort while delivering cleaner energy for everyone.

We are an Equal Opportunity employer striving to create a diverse, equitable, and inclusive work environment where everyone feels they have a voice that is heard. We strongly encourage candidates to visit our website at www.renewhome.com to learn more about the world-changing work we are doing.

What You Will Do

- Architect and deploy secure, scalable, and highly available batch and real-time data pipelines. Implement and optimize data lake architectures for structured and unstructured data from millions of thermostats.
- Work closely with development teams to integrate data engineering services into the broader system architecture. Collaborate with cross-functional teams of engineers, data scientists, and analysts to deliver clean, reliable data.
- Analyze and enhance the performance of PostgreSQL Aurora and MySQL databases through query tuning, indexing strategies, and efficient resource allocation.
- Strive for a 99.999% uptime SLA for our systems. Participate in on-call rotations, respond to application and data infrastructure incidents, and provide detailed incident reports. Ensure data quality, integrity, and compliance with best practices and governance standards.
- Contribute to the design and evolution of our data architecture to support growing business needs.
- Work on various aspects of our stack, including Python, MySQL, Postgres, AWS/GCP (CDK, ECS/EKS, RDS, Redshift, zeroETL, Prefect, S3 Tables, Athena, Iceberg, Glue, S3, SQS, SES, Pub/Sub, etc.), Redis, Git, and Jira.
- Implement application monitoring tools and proactively monitor application performance.
- Participate in our agile development process, including regular team updates, stand-up meetings, and one-on-ones.

Requirements

- 5-10 years of industry experience.
- Bachelor's or Master's degree in computer science, or equivalent experience in the software industry.
- Self-starter who takes the initiative to identify improvement areas, rigorously tests potential solutions, and proposes actionable enhancements to drive operational success.
- Hands-on experience building scalable batch and real-time data pipelines using structured and unstructured data. Experience with orchestration tools such as Prefect, Airflow, or Dagster.
- Experience with streaming technologies like Apache Kafka, AWS Kinesis, Apache Flink, or GCP Pub/Sub.
- Strong knowledge of data lake architectures and technologies (e.g., AWS S3, AWS Glue, Delta Lake, or similar).
- Proven ability to analyze and optimize database performance, including query tuning, indexing strategies, and resource allocation, preferably with Redshift and Postgres.
- Proficiency in using CDK and Terraform to automate infrastructure deployment and management.
- Strong software engineering background and proficiency in one or more programming languages such as Python, Java, PHP, or Ruby.
- Ability to work collaboratively with development teams, providing guidance and mentorship on data infrastructure issues and best practices.
- Commitment to staying up to date with the latest advancements in cloud infrastructure and database technologies, and to continuously improving processes and systems.

Bonuses

- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Experience with Prefect.
- Familiarity with data warehousing best practices and advanced Redshift features (e.g., Spectrum, workload management).
- Exposure to machine learning pipelines or big data frameworks like Apache Spark or Hadoop.
- Contributions to open-source data projects or relevant certifications (e.g., AWS Certified Data Analytics, GCP Professional Data Engineer).

Company Location: United States.