Jorge Machado
Data Architect
Experience
Data Architect
Deutsche Bahn
- Design and provide best practices on data modeling for dbt, including slowly changing dimensions, late-arriving data handling, and testing
- Design the ingestion flow from other systems into S3 and Redshift
- Design and implement new Dagster partitions and incremental loading with dbt (see the sketch after this list)
- Map business requirements to technical architectures
- Mentor junior team members
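The Dagster partitioning and dbt incremental loading mentioned above can be illustrated with a minimal Python sketch; the asset name, bucket, and start date are hypothetical, not taken from the actual project.

```python
# Minimal sketch, assuming Dagster >= 1.5: a daily-partitioned asset that lands one
# day of source data, which a dbt incremental model can then pick up by filtering on
# the partition date. Asset and bucket names are illustrative placeholders.
from dagster import AssetExecutionContext, DailyPartitionsDefinition, asset

daily_partitions = DailyPartitionsDefinition(start_date="2023-01-01")

@asset(partitions_def=daily_partitions)
def raw_train_events(context: AssetExecutionContext) -> None:
    day = context.partition_key  # e.g. "2024-05-01"
    context.log.info(f"Extracting source slice for {day} into s3://example-bucket/raw/{day}/")
    # The actual extraction and upload logic for that day's slice would go here.
```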
Data Architect Expert
SAP AG
- Led the architectural design and implementation of Kafka Tiered Storage rollout across 30+ Kubernetes clusters in multi-cloud environments (Azure, AWS, GCP)
- Defined and implemented infrastructure provisioning using Crossplane for declarative and consistent deployment across cloud providers
- Developed a custom Golang-based Kafka Operator to standardize tiered storage activation for data pipelines (see the sketch after this list)
- Designed and automated GitOps-based deployment strategies using Flux and Helm for safe and repeatable rollouts
- Optimized Gardener shoot configurations to align cluster resources with Kafka workload and cost efficiency requirements
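The tiered-storage activation handled by the custom Golang operator can be approximated in a few lines of Python; this is a hedged sketch assuming a Strimzi-style Kafka custom resource and the standard KIP-405 broker settings, not the operator's actual reconciliation logic.

```python
# Hedged sketch: patch a (hypothetical, Strimzi-style) Kafka custom resource to turn on
# tiered storage. The real rollout used a custom Golang operator; this only illustrates
# the declarative intent that such an operator reconciles.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

patch = {"spec": {"kafka": {"config": {
    "remote.log.storage.system.enable": "true",  # KIP-405 tiered storage switch
    "remote.log.storage.manager.class.name": "org.example.TieredStorageManager",  # placeholder plugin
}}}}

api.patch_namespaced_custom_object(
    group="kafka.strimzi.io", version="v1beta2", namespace="kafka",
    plural="kafkas", name="data-pipeline-cluster", body=patch,
)
```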
Data Architect Expert
s.Oliver GmbH
- Designed a medallion architecture on Databricks for scalable, modular data ingestion, transformation, and consumption
- Implemented incremental ETL pipelines using PySpark to efficiently extract and process SAP data (see the sketch after this list)
- Architected and implemented dbt-based semantic layers with dimensional modeling for fact and dimension tables
- Established Dev-to-Prod CI/CD pipelines to standardize deployment and enforce governance
- Defined role-based access control and security concepts aligned with enterprise Azure standards
- Enabled real-time data integration by connecting Kafka streams to Databricks for enriched analytics
- Introduced AI/ML use cases, including FP-Growth for basket analysis and time series forecasting models
- Mentored junior developers on Databricks best practices to ensure long-term platform adoption
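The incremental PySpark load into the medallion architecture can be sketched as a Delta Lake merge from a bronze extract into a silver table; paths, table names, and keys are illustrative rather than the client's actual model.

```python
# Illustrative incremental load: merge the latest SAP extract (bronze) into a Delta
# silver table. Paths, table names, and join keys are placeholders.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = (
    spark.read.format("parquet")
    .load("/mnt/bronze/sap/sales_orders/ingest_date=2024-05-01/")
    .withColumn("loaded_at", F.current_timestamp())
)

target = DeltaTable.forName(spark, "silver.sales_orders")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```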
Data Architect Expert
ias Gruppe
- Architected an end-to-end Azure Data Lakehouse solution leveraging Azure Synapse, Delta Lake, and Azure Data Lake Storage Gen2 for scalable storage and query performance
- Designed and implemented streaming ingestion pipelines using Azure IoT Hub, Event Hub, and Service Bus for real-time telemetry data capture from thousands of IoT devices (see the sketch after this list)
- Developed data integration and transformation flows using Airbyte for ELT and dbt for business logic modeling, dimensional design, and lineage tracking
- Orchestrated complex data workflows using Azure Data Factory, integrating batch and streaming processes
- Implemented Delta Lake-based time travel and ACID transactions for data reliability and traceability
- Designed RBAC, resource tagging strategies, and monitoring with Azure Monitor and Log Analytics for operational transparency and security
- Enabled Power BI integration for near real-time business dashboards and collaborated with product and operations teams on requirements translation
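The streaming ingestion path can be illustrated with a small consumer built on the azure-eventhub SDK; the connection string, hub name, and payload fields are placeholders, and the real pipelines also involved IoT Hub and Service Bus.

```python
# Minimal telemetry consumer sketch using azure-eventhub (v5 API); all names and the
# connection string are placeholders.
import json
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    telemetry = json.loads(event.body_as_str())
    # Landing / forwarding logic (e.g. write to ADLS Gen2 or a Delta table) goes here.
    print(partition_context.partition_id, telemetry.get("device_id"))
    partition_context.update_checkpoint(event)  # no-op unless a checkpoint store is configured

client = EventHubConsumerClient.from_connection_string(
    conn_str="<EVENT_HUB_CONNECTION_STRING>",
    consumer_group="$Default",
    eventhub_name="device-telemetry",
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # read from the beginning
```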
Data Architect Expert
Deutsche Bahn
- Designed and implemented real-time streaming architectures using AWS Kinesis, Lambda, and Apache Spark for time-sensitive analytics use cases (see the sketch after this list)
- Architected delta ingestion pipelines on AWS Glue and Apache Hudi for efficient small-file compaction and time travel analytics
- Delivered business-critical KPIs and dashboards with end-to-end data lineage and auditability across S3, PostgreSQL, and CloudWatch
- Defined and enforced infrastructure-as-code principles using AWS CDK for scalable, replicable environments
- Introduced and rolled out dbt for semantic modeling and reusable business logic integrated into GitLab CI/CD workflows
- Conducted architectural evaluations of Databricks, Snowflake, and AWS Athena to support future platform strategy decisions
- Mentored a team of developers, optimizing development cycles and ensuring cloud data engineering best practices
- Implemented Industry 4.0 IoT pipelines for ingesting telemetry data and supporting predictive analytics initiatives
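The Kinesis-based streaming path can be sketched as a Lambda handler consuming a Kinesis batch; the payload fields and return shape are illustrative assumptions, not the production code.

```python
# Hedged sketch of a Kinesis-triggered Lambda for time-sensitive events; field names
# are invented for illustration.
import base64
import json

def handler(event, context):
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Enrichment and forwarding to the analytics sink would happen here.
        print(payload.get("train_id"), payload.get("delay_seconds"))
    # With ReportBatchItemFailures enabled, an empty list signals full success.
    return {"batchItemFailures": []}
```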
Kafka Expert
s.Oliver GmbH
- Developed Spring Boot Kafka Streams applications
- Created custom Kafka source connectors for SAP systems and custom sink connectors to write back to SAP
- Deployed Kafka Connect connectors with monitoring on Azure Kubernetes Service
- Developed data pipelines using Airflow and Azure Cloud
- Architected data pipelines between on-premises systems and the Azure cloud
- Wrote Spark jobs to clean and aggregate data
Software Developer
RTL Deutschland
- Designed and implemented a Lakehouse architecture combining Azure Databricks, Delta Lake, and Azure Synapse for batch and real-time workloads with ACID compliance
- Built RESTful data APIs using FastAPI and deployed them via Azure App Services as a controlled access layer (see the sketch after this list)
- Developed incremental ETL pipelines using PySpark and dbt, implementing star schema models for semantic consistency and historical tracking
- Enabled interactive reporting and visual analytics using Power BI integrated into Azure
- Implemented strong data access controls, audit logging, and resource monitoring for GDPR compliance and governance
- Established automated CI/CD pipelines for data infrastructure using Azure-native tooling
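The FastAPI access layer can be sketched as a single read endpoint; the route, model, and dummy response are hypothetical stand-ins for the actual service.

```python
# Minimal FastAPI sketch of a controlled data access layer; endpoint and schema are
# illustrative, and the hard-coded response stands in for a query against the lakehouse.
from datetime import date
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Reporting Data API")

class KpiRecord(BaseModel):
    kpi: str
    value: float
    as_of: date

@app.get("/kpis/{kpi_name}", response_model=list[KpiRecord])
def read_kpi(kpi_name: str, as_of: Optional[date] = None) -> list[KpiRecord]:
    # In the real service this would query the curated lakehouse layer.
    return [KpiRecord(kpi=kpi_name, value=42.0, as_of=as_of or date.today())]
```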
Cloud Solution Architect
Allianz Technology
- Migrated data lakes to Azure with a high degree of automation using ArgoCD, Jenkins, Helm charts, and Terraform
- Developed Spark jobs for data lake migration
- Created Helm charts for Azure AKS automation
- Refactored application designs to be cloud-native and onboarded internal customers to Azure
- Implemented Spring Boot Kafka Streams applications and Argo workflow pipelines
Big Data Architect, Data Architect
BMW AG
- Developed data pipelines using Spark and Airflow for self-driving car data (see the sketch after this list)
- Generated metrics for geospatial applications
- Ingested data into Elasticsearch using Apache Spark
- Applied functional programming principles with Scala
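The Spark-plus-Airflow pipelines can be illustrated with a small DAG that submits one daily Spark job; the DAG id, script path, and connection id are placeholders, assuming Airflow 2.4+ with the Apache Spark provider installed.

```python
# Illustrative Airflow DAG (Airflow 2.4+ with apache-airflow-providers-apache-spark);
# names and paths are placeholders, not the actual pipeline.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="drive_data_metrics",
    start_date=datetime(2023, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    compute_metrics = SparkSubmitOperator(
        task_id="compute_geospatial_metrics",
        application="/opt/jobs/geo_metrics.py",  # PySpark job computing the metrics
        conn_id="spark_default",
    )
```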
Big Data Developer
DXC
- Automated Azure Kubernetes cluster deployments
- Created and deployed deep learning Spark jobs with PyTorch and GPUs on Kubernetes (see the sketch after this list)
- Performed GPU inferencing on terabytes of data
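The GPU inference jobs can be sketched as a Spark pandas UDF wrapping a TorchScript model; the model path, feature column, and output schema are assumptions for illustration.

```python
# Rough sketch of batched GPU scoring from Spark, assuming a TorchScript model is
# available on each executor and a numeric array column "features"; all names are
# placeholders. Loading the model per batch keeps the sketch short; in practice it
# would be cached per executor.
import pandas as pd
import torch
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

@pandas_udf(DoubleType())
def score(features: pd.Series) -> pd.Series:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.jit.load("/models/classifier.pt", map_location=device).eval()
    with torch.no_grad():
        x = torch.tensor(features.tolist(), dtype=torch.float32, device=device)
        return pd.Series(model(x).squeeze(-1).cpu().numpy())

df = spark.read.parquet("/data/features")
df.withColumn("score", score("features")).write.mode("overwrite").parquet("/data/scored")
```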
Big Data Developer, Spark / Kafka Developer, Data Architect
GfK
- Wrote Kafka Connectors to ingest data into Accumulo in a Kerberized environment
- Kerberized applications for Hadoop, Kafka, and Kafka Connect
- Created statistic plans for RDF4J queries over Accumulo
- Developed Apache NiFi workflows
- Introduced Git flow, CI/CD, and Docker automation
- Set up Kafka Connect with Kerberos on Google Kubernetes
- Wrote Java applications based on RDF and web semantics
Big Data Architect
Deutsche Bahn
- Sized and configured Hadoop clusters with Kerberos and Active Directory
- Migrated data using Sqoop and managed workflows with Oozie
- Implemented data pipelines using Kylo, Apache NiFi, and Talend
- Deployed Hortonworks Cloudbreak on AWS and Apache Storm streaming applications
- Supported internal clients with streaming and data cleaning operations
Big Data Developer and Architect
Kiwigrid
- Created Spark jobs for historical data reporting
- Developed custom Spark data sources for HBase and aggregation for data exploration
- Architected an alerting and computing framework based on Spark Streaming
- Deployed applications using Docker
Industries Experience
Experienced in Transportation (3 years), Information Technology (2 years), Fashion (1.5 years), Retail (1.5 years), Media and Entertainment (1.5 years), and Insurance (1 year).
Business Areas Experience
Experienced in Information Technology (7.5 years), Business Intelligence (5.5 years), and Product Development (0.5 years).
Skills
General Skills:
- Apache Spark
- Java MapReduce
- Scala
- Java
- Python
- Perl
- Tornado
- REST APIs
- Jira
- ETL
- Docker
- Maven
- Gradle
- Kubernetes
- Jenkins
- Cloud Build
- Azure Cosmos DB
- S3
- Neo4j
- Azure Kubernetes Service
- AKS
- Flask
- Spring Boot
- Data Vault 2.0
- PyTorch
- TensorFlow
- Azure IoT
- Modbus
- MQTT
- OPC
- PLC
- Azure Data Factory
- Azure Synapse
- LLM
Operating System Skills:
- AIX
- Ubuntu
- CentOS
- macOS
- Windows Server 2008 R2
- FlexFrame
- Routing
- Git
- IBM HADR
- IBM TSM
- AWS S3
- Apache Mesos
SAP Skills:
- RFC
- SNC
- ChaRM
- Kernel Upgrades
- EHP Upgrade
- SSFS
- SSO
- HANA
Databases:
- Oracle 11
- DB2
- SAP MaxDB
- MySQL
- AWS Redshift
- Postgres
Cloud Technologies:
- AWS EMR
- AWS Glue
- AWS ECS
- AWS S3
- Google App Engine
- Azure Kubernetes
- Azure Containers
Certifications & Licenses
Databricks Lakehouse Platform Accreditation
Confluent Certified Developer For Apache Kafka
Generative AI With Large Language Models (LLM)
CKAD: Certified Kubernetes Application Developer
Microsoft Certified: Azure Fundamentals
Data Engineering Nanodegree
Functional Programming Principles in Scala on Coursera
Big Data Analytics (Fraunhofer IAIS)
Big Data Analytics by University of California, San Diego on Coursera
Databricks Developer Training for Apache Spark
Hadoop Platform and Application Framework by University of California on Coursera
Machine Learning with Big Data by University of California, San Diego on Coursera
SAP OS and DB Migration (TADM70)
SAP Database Administration I (Oracle) (ADM505)
SAP Database Administration II (Oracle) (ADM506)
SAP NetWeaver AS Implementation and Operation I (SAP TADM10)
SAP NetWeaver Portal - Implementation and Operation (TEP10)
ITIL Foundation V4