Senior Engineer, Data Management (Remote) at Jobgether


This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Engineer, Data Management in the United States.

As a Senior Engineer, Data Management, you will lead the design, development, and optimization of large-scale data pipelines and analytics frameworks. You will work with cross-functional teams to ensure high-quality, reliable data flows and to operationalize AI/ML solutions at scale. This role combines hands-on engineering with strategic oversight, emphasizing metadata-driven pipelines, automated quality frameworks, and continuous process improvement. You will mentor team members, establish best practices, and help drive a culture of data excellence. The position is fast-paced, collaborative, and ideal for engineers who thrive in complex cloud environments and enjoy shaping enterprise data strategies.

Accountabilities:

In this role, you will be responsible for:

- Designing, building, and automating data pipelines using Azure Data Factory, Databricks/Spark, Snowflake, and Kafka.
- Developing and maintaining metadata-driven frameworks to promote self-service and streamline data ingestion and transformation.
- Implementing data quality, monitoring, and alerting frameworks to ensure accurate and reliable datasets.
- Conducting complex data analysis to support business decisions and provide insights to stakeholders.
- Collaborating with engineering, architecture, and business teams to define standards, design patterns, CI/CD automation, and DevOps best practices.
- Documenting data models, data flows, technical mappings, and production support information.
- Leading and mentoring team members while evangelizing coding standards, design patterns, and enterprise best practices.
- Responding to SLA-driven production issues and continuously evaluating opportunities for process automation.

The ideal candidate will have:

- 5+ years of experience in enterprise data management or data engineering roles.
- Hands-on expertise with metadata-driven data pipelines and cloud data platforms such as Azure Data Factory, Databricks/Spark, and Snowflake.
- Strong proficiency in Python, PySpark, SQL, and Jupyter Notebooks for data analysis and transformation.
- Experience with DevOps practices and tools such as Azure DevOps or GitLab in multi-developer environments.
- Knowledge of enterprise ETL platforms (e.g., IBM DataStage, Informatica, Pentaho, Ab Initio) and reporting tools such as Power BI, Tableau, or OBIEE.
- Familiarity with infrastructure automation and cloud tools such as Terraform, Azure CLI, PowerShell, Kubernetes, and Docker.
- Strong communication and collaboration skills, with the ability to explain technical concepts to stakeholders.
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field.
- Personal attributes: self-starter, curious, highly motivated, collaborative, team-oriented, and adaptable to fast-paced Agile environments.

Company Location: United States.