
Senior Data Engineer at HighLevel

Location: Delhi

About HighLevel:
HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprising agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million domain names.

Our People:
With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact:
As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations each month for the more than 2 million businesses we serve. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

About the Role:
We are seeking a talented and motivated Senior Data Engineer to join our team. You will be responsible for designing, developing, and maintaining our data infrastructure, and for building backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. The role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.

Requirements:

- 4+ years of experience in software development.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows.
- Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns.
- Software Engineering Experience: Demonstrated experience (SDE II/III level) designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala, or Java).
- ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink, or similar technologies.
- Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift).
- Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, Elasticsearch).
- Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment.
- Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment.
- Experience with data visualization tools (e.g., Superset, Tableau), Terraform and infrastructure-as-code (IaC), ML/AI data pipelines, and DevOps practices is a plus.
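To give a concrete flavor of the stack above and the responsibilities below, here is a minimal, hypothetical sketch of the kind of event-driven worker this role builds: a Python consumer that pulls message events from a Google Cloud Pub/Sub subscription and acknowledges them once handled. The project ID, subscription name, and event fields are illustrative assumptions, not HighLevel internals.

```python
# Hypothetical streaming ingestion worker: Pub/Sub -> downstream sink.
# All names below are illustrative assumptions, not real HighLevel resources.
import json
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

PROJECT_ID = "example-project"          # assumption: placeholder project
SUBSCRIPTION_ID = "message-events-sub"  # assumption: placeholder subscription

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)


def handle_event(message: pubsub_v1.subscriber.message.Message) -> None:
    """Parse one message event and hand it to a downstream sink."""
    try:
        event = json.loads(message.data.decode("utf-8"))
        # A real pipeline would batch-write to a staging table
        # (e.g. Snowflake or BigQuery) instead of printing.
        print(f"event {event.get('id')} of type {event.get('type')}")
        message.ack()  # ack only after the event is safely handled
    except ValueError:
        message.nack()  # redeliver malformed payloads for inspection


streaming_pull_future = subscriber.subscribe(subscription_path, callback=handle_event)

with subscriber:
    try:
        # Block the main thread while background threads consume events.
        streaming_pull_future.result(timeout=60)
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```

Acknowledging only after successful handling is what keeps a pipeline like this reliable at scale: unprocessed or malformed events are redelivered rather than silently dropped.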
Responsibilities:

- Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns.
- Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures.
- Data Pipeline Development: Design, develop, and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources.
- Data Storage & Warehousing: Build and optimize databases, data lakes, and data warehouses (e.g., Snowflake) for high-performance querying.
- Data Integration: Work with APIs and with batch and streaming data sources to ingest and transform data.
- Performance Optimization: Optimize queries, indexing, and partitioning for efficient data retrieval.
- Collaboration: Work with data analysts, data scientists, software developers, and product teams to understand requirements and deliver scalable solutions.
- Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably.
- Ownership & Problem-Solving: Proactively identify issues and bottlenecks and propose innovative solutions to address them.

EEO Statement:
At HighLevel, we value diversity; in fact, we understand it makes our organisation stronger. We are committed to inclusive hiring and promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences, while providing excellent service to our clients and learning from one another along the way. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

#LI-Remote #NJ1