Utilization: 100%
Work model: Hybrid
Start date: Aug 2025
End date: Dec 2025
Expected years of experience: Senior (4-7 years)
Project:
On behalf of our client, we are looking for a Data Engineer for a project in which they are building an observability platform that enables their product and platform teams to gain insight into the availability and reliability of their digital products.
The platform is now being extended with additional analytical and digital-insight capabilities to further strengthen its existing predictive analytics, insights, and advanced event management.
The scope of the consultant services is to assist in:
- Correlate Observability Signals:
  - Integrate and analyze observability data from various sources (logs, metrics, traces) with business context, using Grafana and Prometheus as the primary monitoring and visualization tools.
- Enrich Data:
  - Develop pipelines that enrich technical signals with business data, leveraging OpenTelemetry (OTel) for instrumentation and observability (an illustrative sketch follows this list).
- Machine Learning Integration:
  - Design, implement, and deploy machine learning models for anomaly detection and forecasting.
  - Drive new ways of working with ML models to reduce noise and increase the value derived from incidents.
  - Continuously improve ML models to enhance incident management and business insights.
- Reduce Interruptions:
  - Identify patterns and trends that predict potential business disruptions and help mitigate risks.
- Increase Actions Taken:
  - Enable automated or manual actions by making observability insights accessible and actionable for business stakeholders.
- Collaborate:
  - Work closely with IT, DevOps, and business teams to ensure data-driven solutions meet both technical and business needs.
- Continuous Improvement:
  - Monitor and improve data pipelines for accuracy, reliability, and performance.
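For illustration only, and not part of the assignment specification, the sketch below shows the kind of enrichment-plus-anomaly-detection step described in the list above: a technical signal is joined with hypothetical business context and checked with a simple z-score rule. All names, mappings, and thresholds (MetricPoint, SERVICE_TO_PRODUCT, error_rate, the 3-sigma cut-off) are assumptions made for the example; in practice the signals would come from OpenTelemetry/Prometheus and the results would feed Grafana dashboards or an incident workflow.

```python
"""Minimal sketch: enrich an observability metric with business context and
flag anomalies with a simple z-score. All names and data are hypothetical;
real signals would come from OpenTelemetry / Prometheus instrumentation."""

from dataclasses import dataclass
from statistics import mean, stdev
from typing import Iterable


@dataclass
class MetricPoint:
    service: str       # technical identifier emitted by instrumentation
    error_rate: float  # e.g. fraction of failed requests in a time window


# Hypothetical mapping from technical services to business products.
SERVICE_TO_PRODUCT = {
    "checkout-api": "Web Shop",
    "payments-worker": "Web Shop",
    "search-api": "Product Search",
}


def enrich(points: Iterable[MetricPoint]) -> list[dict]:
    """Attach business context so alerts can be expressed in product terms."""
    return [
        {
            "service": p.service,
            "product": SERVICE_TO_PRODUCT.get(p.service, "Unknown"),
            "error_rate": p.error_rate,
        }
        for p in points
    ]


def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Very simple anomaly check: z-score of the latest value against history."""
    if len(history) < 2:
        return False
    sigma = stdev(history)
    if sigma == 0:
        return latest != mean(history)
    return abs(latest - mean(history)) / sigma > threshold


if __name__ == "__main__":
    history = [0.010, 0.012, 0.009, 0.011, 0.010]  # past error rates (hypothetical)
    latest = MetricPoint(service="checkout-api", error_rate=0.08)

    enriched = enrich([latest])[0]
    if is_anomalous(history, enriched["error_rate"]):
        print(f"Possible disruption for {enriched['product']}: "
              f"error rate {enriched['error_rate']:.1%} on {enriched['service']}")
```

In a real pipeline, logic of this kind would typically run as a scheduled or streaming job (for example via Airflow or Kafka consumers) rather than as an in-memory script.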
Requirements:
- Proven experience as a Data Engineer or in a similar role, with hands-on experience in data integration and pipeline development
- Proficiency in SQL and Python
- Experience with Google Cloud Platform (GCP)
- Hands-on experience with BigQuery as a business intelligence/analytics tool
- Expertise in Grafana and Splunk as primary observability tools
- Strong familiarity with OpenTelemetry (OTel) for observability instrumentation
- Knowledge of big data technologies (e.g., Spark, Kafka, Airflow)
- Experience with machine learning for anomaly detection and forecasting
- Ability to drive new ways of working with ML models to reduce noise and increase value in incident management
- Ability to analyze complex datasets and extract meaningful insights.
Personality traits:
- Team player with a strong desire to collaborate
- Agile mindset
We present candidates on an ongoing basis, which means some assignments may be removed from our website before the official application deadline. If you’re interested in a position, we encourage you to apply as soon as possible.