
Data Quality Engineer (Remote - US) at Jobgether

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Quality Engineer in the United States.

We are seeking a meticulous and technically skilled Data Quality Engineer to ensure the integrity and reliability of organizational data. In this role, you will design, implement, and maintain automated data quality checks across diverse data assets, supporting both internal teams and customer-facing operations. You will collaborate with Data Engineering, Product, and Business teams to identify requirements, troubleshoot issues, and optimize data pipelines. The position offers the opportunity to shape the organization's data quality framework, create actionable insights, and contribute to a culture of data-driven decision-making. This is a hands-on role where your work directly impacts the accuracy, consistency, and usability of critical data.

Accountabilities

- Design, develop, document, and maintain a robust data quality framework.
- Build repeatable and automated data quality checks to validate data across pipelines and repositories.
- Execute test plans and test cases, perform bug tracking, and share results with stakeholders.
- Troubleshoot, tune, and optimize data processes for improved performance and reliability.
- Work closely with cross-functional teams to gather requirements and communicate technical concepts.
- Assist with data quality support tickets and inquiries, ensuring timely resolution.
- Design dashboards and reports to analyze and communicate the outcomes of data quality tests.
- Continuously refine and improve data quality processes and tools.

Requirements

- Bachelor's degree in Computer Science, Information Systems, or equivalent experience (5+ years).
- Strong analytical and critical thinking skills for solving complex data issues.
- Proficiency in programming and automation (Python, Scala, shell scripting).
- Excellent SQL skills and experience with Hive (HQL) and HDFS.
- Experience working with structured and unstructured data, including JSON, XML, Parquet, Avro, and flat files.
- Familiarity with Spark/PySpark and data processing frameworks.
- Attention to detail and the ability to consistently meet deadlines.
- Strong communication and interpersonal skills, capable of working independently and collaboratively.
- Experience in troubleshooting, performance tuning, and optimization of data systems.

Company Location: United States.