Data Engineer at 3Pillar Global

We are 3PILLAR GLOBAL. We build breakthrough software products that power digital businesses. We are an innovative product development partner whose solutions drive rapid revenue, market share, and customer growth for industry leaders in Software and SaaS, Media and Publishing, Information Services, and Retail. Our key differentiator is our Product Mindset. Our development teams focus on building for outcomes, and all of our team members around the globe are trained on the Product Mindset's core values: Minimize Time to Value, Solve For Need, and Excel at Change. Our teams apply this mindset to build digital products that are customer-facing and revenue-generating. Our business-minded approach to agile development ensures that we align with client goals from the earliest conceptual stages through market launch and beyond.

In 2023, 3Pillar Global India was named a "Great Place to Work" for the fifth year in a row, based on how our employees feel about our company, collaborative culture, and work/life balance. Come join our growing team.

Job Description

We are looking for a Data Engineer with 4 to 9 years of experience to join the product engineering team.

Responsibilities

- Understand the business requirements.
- Write code and implement the proposed solutions.
- Create data pipelines with versioning and change management, and manage the complexity inherent in versioned data pipelines.
- Build Python-based ETL processes.
- Add logging and instrumentation to pipelines and services.
- Ensure data quality.
- Apply strong debugging skills to analyze and fix defects in data processes.
- Make improvement suggestions.
- Bring strong hands-on experience with data lakes and data warehouses.
- Be a good team player and a great collaborator.

Requirements

Must have:
- Strong core Python skills.
- Strong experience writing ETL/pipeline code.
- Experience working with Spark and writing PySpark code.
- Experience collecting data from multiple sources and integrating it based on business needs.
- Python-based workflows and SDKs.
- pandas, NumPy, and other data-related libraries.
- AWS S3, Lambda, EMR, Glue, Athena, API Gateway, DynamoDB, RDS, SQL and NoSQL databases, CloudWatch.
- Strong Linux experience.
- Understanding of MDM.

Nice to have:
- Exposure to AI/ML.
- Knowledge of Databricks, Snowflake, BigQuery.
- Exposure to multi-cloud data services.
- Exposure to Tableau, Power BI.
- Experience in Java or Scala.
- Experience with Talend.

Benefits

- A competitive annual salary based on experience and market demands.
- Flexi-timings.
- Work from anywhere.
- Medical insurance, with the option to purchase a premium plan or an HSA option for your entire family.
- Regular health check-up camps arranged by the company.
- Recreational activities (pool, TT, Wii, PS2).
- Business casual atmosphere.