
Staff Engineer - Data Platform and Lakehouse (Remote - US) at Jobgether. This position is posted by Jobgether on behalf of Curinos. We are currently looking for a Staff Engineer – Data Platform and Lakehouse in the United States.

This role offers a senior engineering opportunity to design and lead the implementation of a cloud-native data platform that powers advanced analytics, AI, and machine learning applications. The Staff Engineer will work closely with cross-functional teams, including product managers, engineers, and data scientists, to deliver scalable, secure, and high-performance data solutions. The role balances strategic architectural leadership with hands-on engineering: optimizing distributed data systems and supporting the development of next-generation AI and SaaS products. Candidates will influence platform adoption, data governance, and automation standards while helping the organization achieve high reliability, scalability, and operational efficiency across its data ecosystem.

Accountabilities.

· Design, implement, and maintain scalable, secure, and maintainable data platforms on Databricks and cloud infrastructure (AWS).
· Provide architectural leadership and ensure consistency, resilience, and performance across distributed data processing systems.
· Develop reusable data pipelines, workflows, and ETL/ELT processes using Databricks Workflows, Airflow, or AWS Glue.
· Translate business objectives into technical platform capabilities in collaboration with product and cross-functional teams.
· Support AI/ML initiatives, including feature engineering, model deployment, and real-time data processing.
· Drive adoption of data governance standards, including access control, metadata management, lineage, and compliance.
· Establish CI/CD pipelines and DevOps automation for data infrastructure.
· Evaluate and integrate emerging technologies to enhance development, testing, deployment, and monitoring practices.

Requirements.

· 15+ years of experience in software development, covering the full SDLC from design to deployment and support.
· Proven ability to design and implement cloud-native data architectures on Databricks and AWS (Azure or GCP experience a plus).
· Deep expertise in Apache Spark, including performance tuning and distributed computing best practices.
· Advanced proficiency in Python and SQL, with solid software engineering foundations.
· Hands-on experience with Databricks Unity Catalog, Feature Store, Delta Live Tables, and data pipeline orchestration tools.
· Strong understanding of ETL/ELT design, data quality validation, observability, and monitoring practices; experience with Monte Carlo preferred.
· Experience supporting AI/ML workloads and SaaS product integrations.
· Strong communication and collaboration skills for working with engineers, product managers, and data scientists.
· Knowledge of data governance, security, compliance, and metadata management best practices.
· Strategic mindset with the ability to align technical decisions with business goals.

Company Location: United States.