Nagaraju A.

Senior Data Scientist

Glasgow, United Kingdom

Experience

Nov 2023 - Present
2 years 1 month
Glasgow, United Kingdom

Senior Data Scientist

JPMC

  • Designed and deployed Bayesian Marketing Mix Models (MMM) using PyMC (v3+) and PySpark, quantifying ROI and channel-level elasticity across retail and asset management portfolios
  • Engineered ETL and feature pipelines in Airflow and AWS Databricks, automating ingestion of terabyte-scale marketing, transaction, and behavioral data from S3, Hive, Postgres, and Kafka
  • Built Delta Lake and Apache Iceberg architecture supporting adstock, carry-over, and seasonal ETL transformations for model input
  • Implemented hierarchical Bayesian structures and regression-based MMMs using NumPy, PyMC, and TensorFlow Probability to model multi-region effects
  • Optimized PySpark jobs with Liquid Clustering and adaptive partitioning, reducing MMM data-prep runtime by ~40%
  • Automated model training, versioning, and deployment via MLflow and Databricks Asset Bundles, ensuring reproducibility and compliance
  • Streamed near-real-time ad-exposure and conversion data from multi-tenant Kafka clusters into model pipelines
  • Deployed probabilistic inference workflows on AWS EMR using distributed MCMC sampling, reducing convergence time significantly
  • Delivered model explainability dashboards in Plotly Dash visualizing posterior distributions, channel effects, and uncertainty intervals
  • Applied Bayesian regularization and feature selection techniques to optimize MMM performance
  • Integrated MMM outputs into Snowflake and AWS RDS for BI and marketing analytics consumption
  • Implemented data quality monitoring using Great Expectations and integrated validation across ETL workflows
  • Collaborated with quant research teams to embed MMM-driven elasticities into financial forecasting models
  • Automated CI/CD pipelines using Jules and ServiceNow for model retraining and deployment
  • Delivered cross-functional MMM insights to marketing, finance, and analytics teams to support budget optimization
  • Defined and implemented advanced eCommerce tracking for online transactions, allowing granular reporting on product performance and customer journey analysis
  • Integrated Google Analytics with Google Ads and CRM systems, enabling cross-platform attribution and seamless data flow
  • Utilized regression analysis, decision trees, and clustering to predict customer behavior and segment audiences for targeted marketing
  • Developed a multi-touch attribution model to accurately assign conversion credit across digital touchpoints, improving understanding of the customer journey
Sep 2018 - Jan 2023
4 years 5 months
Stevenage, United Kingdom

Data Scientist - MMM/ML activities

GlaxoSmithKline

  • Designed and implemented Bayesian MMM frameworks in PyMC to evaluate ROI across multichannel marketing campaigns in consumer health and pharma domains
  • Built end-to-end ETL pipelines using Airflow, Kafka, Azure Data Factory, and Databricks Spark integrating CRM, sales, scheduling, and process data (>100 TB)
  • Developed probabilistic regression models with hierarchical priors to capture campaign, region, and HCP-level heterogeneity
  • Built schema-evolving data models using open table formats and ADLS Gen2 integration
  • Implemented Bayesian inference workflows with MCMC sampling on Azure Databricks for channel elasticity estimation
  • Developed custom priors to reflect domain knowledge such as decay rates, carry-over, and saturation effects
  • Automated training and evaluation pipelines using Azure ML and MLflow with version-controlled experiments
  • Implemented streaming analytics using Kafka and Flink to continuously refresh MMM datasets from digital and field systems
  • Built PySpark feature stores and validation layers to ensure data quality and consistency
  • Conducted model diagnostics using WAIC, LOO-CV, and posterior predictive checks
  • Created Power BI and Plotly Dash dashboards for marketing teams to visualize MMM insights and posterior ROI curves
  • Ensured data governance, lineage tracking, and GDPR/GxP compliance across all Azure data pipelines
  • Migrated legacy MMM workloads from on-prem HDP to Azure Databricks, improving scalability and reducing processing time by 60%
  • Built budget optimization simulators in Python using Bayesian Decision Theory principles
  • Partnered with commercial analytics teams to operationalize MMM insights into forecasting and promotional planning models
  • Streamed data from cloud-hosted Kafka sources using Kafka Connect and Flink
  • Created standardized SQL engine clusters using PrestoDB
  • Created virtual cloud data warehouses in Snowflake, querying data using SnowSQL, Spark jobs, and Tez
  • Maintained documentation on Confluence and managed builds with Groovy on Jenkins
Mar 2017 - Aug 2018
1 year 6 months
Reading, United Kingdom

Data Engineer

Visa Europe

  • Performed data analysis on CDH5 and CDH6 clusters using Apache Hue
  • Managed autoscaling and maintenance of AWS EMR clusters
  • Designed massive data warehouse solutions to offload 800 TB of data from DB2 storage to Hadoop
  • Set up streaming processes for transactional and clearing data using Kinesis
  • Implemented workflow schedules using Airflow and Oozie
  • Implemented streaming ingestion from various data sources using the Confluent Kafka Platform with 10 broker nodes
Feb 2016 - Jan 2017
1 year
Madrid, Spain

Hadoop/Big Data Engineer

Solera Holdings

  • Worked with Hadoop, Sqoop, Hive, HBase, Spark, AKKA, Lucene, Solr, Pig, Pentaho, Hue, and Scala
Jan 2015 - Jan 2016
1 year 1 month
Nottingham, United Kingdom

Big Data Hadoop Developer

Silicon Integra Limited

  • Worked with Hadoop, Sqoop, R, Kite SDK, Kudu, Hive (CDH5.4, CDH5.6), HBase, Impala, Hue, Spark, Oozie, AWS EMR, Azure, Solr, Pig, Valuation and estimation algorithms, Paxata, Scala, and Presto DB
May 2014 - Jan 2015
9 months
Stockton-on-Tees, United Kingdom

Hadoop Developer / Analyst Consultant

Nortech Solutions

  • Worked with Hadoop, Sqoop, Hive, HBase, Spark, AKKA, Lucene, Solr, Pig, Pentaho, Hue, and Scala
Aug 2013 - Jan 2014
6 months
Hyderabad, India

Big Data Developer/Engineer

Nextgen Solutions

  • Worked with Hadoop, Hive, Scala, JSF, MongoDB, HBase, ActiveMQ, and multithreading
Mar 2012 - Mar 2013
1 year 1 month

Big Data/Hadoop Engineer

Tata Telecom

  • Worked with Hadoop Analytics, Pentaho, Java, Python, J2EE, and the Hadoop ecosystem

Summary

13+ years of experience in planning, building, implementing, and integrating full-scale commercial projects across verticals including financial services, retail, insurance, banking, high-tech, social media, oil and gas, and networking/telecom. Since starting my career as a Graduate Systems Engineer with TCS, I have worked on large-scale Java and Hadoop projects that are highly scalable, distributed, and available.

Worked with various cloud environments such as AWS and Azure, and with open-source cloud deployment and configuration tools such as OpenStack and OpenNebula. Gained hands-on experience with NoSQL databases including MongoDB, HBase, and Cassandra. Worked with various Agile practices including TDD, BDD, pair programming, continuous integration, and Scrum.

Worked with programming languages including Java, Scala, Python, Go, C, shell scripting, J2EE, and JSF, as well as PySpark, the Apache Hadoop ecosystem (Hortonworks, Cloudera, Acceldata ODP), ETL practices, and analytics platforms.

Languages

English
Advanced
Spanish
Elementary

Education

Jan 2014 - Jul 2015

Northumbria University

Master of Science · Computer Science · Newcastle upon Tyne, United Kingdom

May 2006 - Jun 2010

JNTU

Bachelor of Technology · Electrical, Electronics and Communications Engineering · India
