Big Data Engineer at Jobgether


This position is posted by Jobgether on behalf of SynergisticIT. We are currently looking for a Big Data Engineer in South Carolina.

This role is ideal for technically skilled professionals eager to work with large-scale data solutions and drive meaningful insights across diverse industries. You will design, implement, and optimize data pipelines, manage data storage solutions, and collaborate with cross-functional teams to deliver high-quality, scalable solutions. The position emphasizes hands-on experience with modern big data tools, cloud platforms, and data analytics frameworks, providing exposure to cutting-edge technologies. You will also contribute to the development of best practices for data processing, ETL, and analytics, while enhancing the overall data strategy. This environment encourages continuous learning, problem-solving, and practical application of technical skills to real-world projects. Strong communication, adaptability, and initiative are key to thriving in this role.

Accountabilities

·         Design, build, and maintain scalable data pipelines and ETL processes.
·         Integrate and optimize data storage solutions using relational and non-relational databases.
·         Collaborate with data scientists, analysts, and engineers to ensure data availability and quality.
·         Monitor, troubleshoot, and improve data processing performance and reliability.
·         Implement data governance, security, and best practices for data management.
·         Develop dashboards, reports, and visualizations to support decision-making processes.
·         Stay current with emerging big data technologies and recommend improvements to existing systems.

Requirements

·         Bachelor’s degree in Computer Science, Data Engineering, or a related field, or equivalent experience.
·         Strong programming skills in Java, Python, or Scala.
·         Experience with big data technologies such as Hadoop, Spark, Kafka, or Flink.
·         Knowledge of cloud platforms (AWS, Azure, GCP) and data storage solutions.
·         Proficiency in SQL and data modeling.
·         Analytical thinking, problem-solving, and attention to detail.
·         Strong communication skills for collaboration across teams.
·         Bonus: Experience with machine learning pipelines, NoSQL databases, or containerization (Docker/Kubernetes).

Company Location: United States.