FBS Database Operations Engineer at Capgemini


FBS, Farmer Business Services, is part of Farmers operations, with the purpose of building a global approach to identifying, recruiting, hiring, and retaining top talent. By combining international reach with US expertise, we build diverse, high-performing teams equipped to thrive in today's competitive marketplace. We believe that the foundation of every successful business lies in having the right people with the right skills. That is where we come in: helping Farmers build a winning team that delivers consistent, sustainable results.

Since we do not have a local legal entity, we have partnered with Capgemini, which acts as the Employer of Record and is responsible for managing local payroll and benefits.

What to expect on your journey with us:
A solid and innovative company with a strong market presence
A dynamic, diverse, and multicultural work environment
Leaders with deep market knowledge and strategic vision
Continuous learning and development

Summary:
The Database Operations (DBOps) Engineer ensures the availability, reliability, scalability, and security of enterprise databases across Oracle, DB2, MSSQL, Snowflake, PostgreSQL, and MySQL. The DBOps Engineer is responsible for the design, deployment, automation, and operational reliability of all production database systems across multi-cloud environments (AWS, Azure, GCP). This role focuses on implementing Infrastructure as Code (IaC) for databases, performance of critical data services, and operational excellence: monitoring, maintenance, autonomous reliability of high-performance data platforms, and incident/change management.

Responsibilities:
Design and implement cloud-native database infrastructure using Terraform and Ansible to provision managed DB instances across clouds (RDS, Azure Database, Cloud SQL) as well as self-managed clusters.
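To illustrate the provisioning responsibility above, here is a minimal Python sketch of assembling parameters for a managed RDS instance. The helper name `rds_instance_params` and every value in it (instance classes, sizes, retention periods) are illustrative assumptions, not Farmers or Capgemini standards; in practice this dict would be passed to boto3's `rds.create_db_instance`, or the whole resource would be expressed in Terraform instead.

```python
# Sketch: assemble parameters for a managed PostgreSQL instance on RDS.
# All identifiers and sizes are hypothetical examples.

def rds_instance_params(name: str, env: str) -> dict:
    """Build a create_db_instance parameter dict for boto3's RDS client."""
    prod = env == "prod"
    return {
        "DBInstanceIdentifier": f"{name}-{env}",
        "Engine": "postgres",
        "DBInstanceClass": "db.r6g.large" if prod else "db.t4g.medium",
        "AllocatedStorage": 200 if prod else 50,     # GiB
        "MultiAZ": prod,                             # high availability in prod only
        "StorageEncrypted": True,                    # encryption at rest
        "BackupRetentionPeriod": 14 if prod else 3,  # days
        "Tags": [{"Key": "env", "Value": env}],
    }

params = rds_instance_params("orders", "prod")
# boto3.client("rds").create_db_instance(**params)  # actual call, not run here
```

Keeping the environment-specific sizing in one function mirrors what an IaC module does: the same template, parameterized per environment, so drift between dev and prod is structural rather than accidental.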
Automate configuration management, security hardening, and patching of database instances across all environments; automate workflows to reduce manual effort and improve reliability.
Develop internal tools and scripts (Python/Bash) that enable production support teams to manage their own database instances and environments safely.
Integrate advanced observability platforms (Dynatrace, CloudWatch) with AIOps tools to establish SLOs and train models for anomaly detection and proactive forecasting of database degradation (for example, predicting slow queries or imminent connection pool exhaustion).
Design, deploy, and govern AI-powered agents (using Azure Copilot or AWS Bedrock) to achieve autonomous self-healing and automated resource management.
Implement advanced monitoring (CloudWatch, Dynatrace) for key database metrics (SLIs/SLOs) such as latency, throughput, error rates, and connection pools. Develop and train predictive ML models that analyze historical telemetry to forecast potential outages or performance bottlenecks, and configure proactive monitoring and alerting for critical services.
Execute backup strategies, validate recovery procedures using Rubrik, and perform restores as needed.
Work closely with application operations and production support teams to troubleshoot issues at the database layer (performance, locks, schema) and the platform layer (multi-cloud, middleware, network, resource limits) to find root causes.
Lead incident response and root cause analysis (RCA) for database outages, performance degradations, and data integrity issues, collaborating with DBAs and application teams.
Implement AI tools to perform real-time RCA, correlate complex event data (logs, metrics), and auto-generate runbooks.
Define and automate scaling strategies (read replicas, sharding, auto-scaling) based on predicted load and business growth; provide input for capacity planning and resource optimization.
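The anomaly-detection responsibilities above can be pictured with a deliberately simple baseline: a rolling z-score over latency telemetry. This is a toy stand-in for the Dynatrace/CloudWatch AIOps tooling the role actually uses; the function name, window, and threshold are all illustrative.

```python
import statistics

# Sketch: flag latency samples that deviate sharply from the recent baseline.
# A rolling z-score is the simplest possible "anomaly model".

def latency_anomalies(samples_ms, window=10, threshold=3.0):
    """Return indices of samples more than `threshold` standard deviations
    above the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples_ms)):
        baseline = samples_ms[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev > 0 and (samples_ms[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# A stable series with one spike at index 12:
series = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 20, 21, 250, 20, 21]
print(latency_anomalies(series))  # → [12]
```

In a real pipeline the same shape applies: compute a baseline from historical telemetry, score incoming metrics against it, and route breaches to alerting, with the statistical model replaced by something trained on far richer features.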
Implement cost management policies, including rightsizing instances, managing storage tiers, and defining lifecycle rules for backups and snapshots.
Support DBA teams in performance tuning initiatives.
Implement robust secrets management solutions (AWS Secrets Manager, HashiCorp Vault) for database credentials, ensuring applications retrieve secrets securely at runtime.
Ensure database environments meet regulatory requirements (PCI, HIPAA, GDPR) through encryption at rest and in transit, audit logging, and automated compliance checks.
Define and enforce least-privilege access policies (IAM roles, service accounts) for databases. Implement encryption and data masking policies as directed.
Manage security and compliance by utilizing AI agents to detect configuration drift and auto-generate compliant updates for IAM, network, and security policies.

Tools & Skills:
5+ years of experience in Oracle, DB2, MSSQL, Snowflake, PostgreSQL, and MySQL administration, with a strong focus on AIOps integration - Advanced
4+ years of experience in public cloud operations (AWS, Azure, GCP) - Advanced
Deep, demonstrable expertise designing and operationalizing solutions leveraging AWS Bedrock/agent frameworks and Azure Copilot for DB operations - Advanced
Expertise in Infrastructure as Code (Terraform, CloudFormation), Ansible, and CI/CD pipelines, including supervising AI-generated infrastructure artifacts - Advanced
Expertise integrating observability platforms into AI/ML platforms for predictive analysis and anomaly detection - Advanced
Hands-on experience with Informatica PowerCenter, Power BI, Cognos, Sapiens, Alteryx, IDMC, ILM, SAS, BusinessObjects, Glue, SPSS, or ODI - Plus
Proficiency in scripting languages (Python or Bash) - Advanced

Company Location: Mexico.
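As an illustration of the runtime secrets-retrieval pattern listed under Responsibilities: a hedged Python sketch of turning an AWS Secrets Manager payload into database connection parameters. The secret field names and the helper `db_params_from_secret` are hypothetical; the actual fetch via boto3's `secretsmanager.get_secret_value` is shown in a comment but not executed here.

```python
import json

# Sketch: parse a Secrets Manager-style JSON payload into DB connection
# parameters, so credentials never live in code or config files.
# Field names are illustrative assumptions.

def db_params_from_secret(secret_string: str) -> dict:
    """Map a JSON secret payload to the keyword args a DB driver expects."""
    secret = json.loads(secret_string)
    return {
        "host": secret["host"],
        "port": int(secret.get("port", 5432)),
        "user": secret["username"],
        "password": secret["password"],
        "dbname": secret.get("dbname", "postgres"),
    }

# In production the payload would come from, e.g.:
#   boto3.client("secretsmanager").get_secret_value(
#       SecretId="prod/orders-db")["SecretString"]
payload = '{"host": "db.internal", "username": "app", "password": "s3cret", "port": "5432"}'
print(db_params_from_secret(payload)["host"])  # → db.internal
```

Fetching at runtime (rather than baking credentials into images or environment files) is what makes rotation and audit logging, mentioned in the compliance responsibilities above, enforceable.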