Data and AI Engineer (EMEA) at Jobgether

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data and AI Engineer in EMEA.

As a Data and AI Engineer, you will design, implement, and optimize advanced data pipelines and backend integrations that support next-generation cybersecurity and data-intensive platforms. You will work on scalable ingestion, enrichment, and annotation workflows while enabling AI-driven features such as natural language querying and intelligent analytics. The role involves close collaboration with cross-functional teams to deliver secure, high-performance, and reliable solutions. Your work will directly shape the architecture, efficiency, and intelligence of complex systems, bridging data engineering, applied AI, and cybersecurity in innovative ways. You will also have opportunities to influence cloud and on-premises deployments and the adoption of AI technologies across the organization.

Accountabilities
- Design and build high-performance data pipelines for ingestion, transformation, and enrichment of large datasets.
- Implement workflows for data correlation, annotation, and contextual enrichment to support analytics and AI features.
- Develop and maintain database schemas, optimizing storage strategies for performance and scalability.
- Integrate AI/ML models into data workflows, including RAG pipelines and embeddings for advanced analytics.
- Ensure reliability and scalability of pipelines and services, including monitoring, error handling, and performance tuning.
- Collaborate with DevOps, product, and security teams to deploy, document, and transfer knowledge of solutions.

Requirements
- BSc/MSc in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
- 5+ years of experience in data engineering or big data analytics, with exposure to AI/ML integration.
- Proficiency in Python (Pandas, PySpark, FastAPI) and familiarity with Java/Scala for Spark workflows.
- Experience designing data pipelines, modeling schemas, and managing large-scale datasets.
- Hands-on experience with big data technologies such as Spark, Iceberg, Hive, Presto/Trino, and Superset.
- Strong SQL skills and experience with at least one NoSQL or distributed storage solution.
- Practical experience building and deploying APIs and services in cloud or on-prem environments.
- Strong problem-solving, debugging, and communication skills; proficiency in English.
- Nice-to-have: experience with RAG pipelines, LLM applications, GPU acceleration, containerization (Docker/Kubernetes), and cybersecurity concepts.

Company Location: Poland.