Project details
Data Engineer (m/f/d)
Project info
- Capacity: from 95%
- Daily rate: 800€
- Location: Munich, Germany
- Language: English (Advanced)
- Remote: from 95%
Description
A company is looking for an experienced Data Engineer to carry out a migration from Snowflake to ClickHouse. The focus is on using Apache Spark for data processing and on managing and optimizing Kubernetes environments. The goal is to build and operate a high-performance, scalable data platform.
- Executing the migration from Snowflake to ClickHouse
- Developing and optimizing data pipelines with Apache Spark
- Managing and optimizing Kubernetes clusters
- Ensuring the performance and scalability of the data platform
- Implementing solutions in Python
- Optional: Working with Snowplow for data analytics
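To illustrate the kind of pipeline step the tasks above describe, here is a minimal PySpark sketch that reads a table from Snowflake and writes it into ClickHouse over JDBC. It assumes the Snowflake Spark connector and a ClickHouse JDBC driver are on the Spark classpath; the connection values and the "events" table name are placeholders, not details from the posting.

```python
# Minimal sketch of one Snowflake-to-ClickHouse transfer step in PySpark.
# Assumes the Snowflake Spark connector and the ClickHouse JDBC driver jars
# are available to Spark; all connection values below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("snowflake-to-clickhouse-migration")
    .getOrCreate()
)

# Read a source table through the Snowflake Spark connector.
snowflake_options = {
    "sfURL": "<account>.snowflakecomputing.com",  # placeholder account URL
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}
df = (
    spark.read
    .format("net.snowflake.spark.snowflake")
    .options(**snowflake_options)
    .option("dbtable", "events")  # hypothetical source table
    .load()
)

# Write the rows into ClickHouse via Spark's generic JDBC sink.
(
    df.write
    .format("jdbc")
    .option("url", "jdbc:clickhouse://<clickhouse-host>:8123/default")  # placeholder
    .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")
    .option("dbtable", "events")  # hypothetical target table
    .mode("append")
    .save()
)
```

In practice such a job would typically run on the Kubernetes cluster mentioned above (for example via spark-submit with a Kubernetes master) and be parameterized per table rather than hard-coded; this sketch only shows the read/write shape of a single transfer.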
Requirements
- Solid experience with Kubernetes and Apache Spark
- Knowledge of ClickHouse and Snowflake
- Very good Python skills
- Experience with data platform migrations
- Optional: Experience with Snowplow
- Fluent English skills