Israr A.

Senior Mechanical Engineering AI Evaluator

Taylor Lake Village, United States

Experience

Jul 2019 - Present
6 years 5 months
United States

Senior Mechanical Engineering AI Evaluator

JLL

  • Led AI evaluation for mechanical engineering model validation using Python, NumPy, Pandas, and SciPy to preprocess and analyze simulation datasets from ANSYS, COMSOL Multiphysics, and ABAQUS.
  • Designed and executed model evaluation pipelines in TensorFlow and PyTorch, exporting inference to ONNX for cross-platform validation and benchmarking with TensorBoard.
  • Implemented model training reproducibility and experiment tracking using MLflow and Weights & Biases, integrating metrics into Prometheus and dashboards in Grafana for SLA monitoring.
  • Built containerized evaluation environments with Docker and orchestrated batch evaluation and A/B testing using Kubernetes and Azure DevOps pipelines.
  • Automated data validation and schema checks for simulation outputs using Great Expectations, Parquet data lakes, and Apache Arrow for high-performance I/O.
  • Collaborated with simulation teams to convert ANSYS, LS-DYNA, and OpenFOAM output into structured HDF5/Parquet datasets, analyzed with Pandas and visualized in ParaView and Matplotlib.
  • Applied explainability techniques (SHAP, LIME) to ML surrogates trained on MATLAB/Simulink- and SolidWorks-generated features to root-cause model drift and bias.
  • Developed performance test suites with pytest and pytest-benchmark, integrated with GitHub Actions and CircleCI for CI/CD gating of model changes.
  • Led cross-functional reviews with data scientists and mechanical engineers to align AI outputs with engineering tolerances from COMSOL and ABAQUS scenarios.
  • Implemented inference hosting and monitoring using AWS SageMaker and Google Cloud AI Platform, instrumenting logging via Prometheus and traces via gRPC endpoints.
  • Designed RESTful APIs with Flask/FastAPI and gRPC endpoints to serve validated model predictions to engineering tools and dashboards.
  • Performed statistical validation and uncertainty quantification using scikit-learn, SciPy, and custom Monte Carlo pipelines to evaluate model reliability under variable loads.
  • Optimized data pipelines for large simulation archives using PostgreSQL for metadata, MongoDB for unstructured logs, and CSV/Parquet storage for bulk artifacts.
  • Created model optimization recommendations: pruning with TensorFlow Model Optimization, quantization-aware training, and conversion to ONNX to improve latency for engineering inference.
  • Conducted fault-injection and robustness tests using synthetic perturbations derived from SolidWorks parametric studies and OpenFOAM CFD variation sets.
  • Documented evaluation procedures, validation criteria, and acceptance tests in versioned artifacts stored in Git, with release automation through Azure DevOps.
  • Mentored junior engineers on integrating MATLAB simulation outputs into ML training sets, and best practices for reproducible experiments with Jupyter notebooks and Weights & Biases.
  • Spearheaded initiatives to standardize model evaluation metrics (MAE, RMSE, R², calibration, reliability diagrams) across teams and instrumented Grafana dashboards to track these KPIs; a minimal sketch of this standardized evaluation step follows this list.
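The sketch below illustrates how such a standardized evaluation step could be wired together (regression metrics logged to MLflow). The dataset path, column names, and experiment/run names are hypothetical placeholders, not details of the actual JLL pipeline.

# Minimal sketch: standardized regression metrics (MAE, RMSE, R^2) logged to MLflow.
# Dataset path, column names, and experiment/run names are hypothetical placeholders.
import mlflow
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score


def evaluate_predictions(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute the standard metric set for one evaluation run."""
    return {
        "mae": float(mean_absolute_error(y_true, y_pred)),
        "rmse": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "r2": float(r2_score(y_true, y_pred)),
    }


if __name__ == "__main__":
    # Hypothetical evaluation table holding ground truth and surrogate predictions.
    df = pd.read_parquet("simulation_eval.parquet")
    metrics = evaluate_predictions(df["target"].to_numpy(), df["prediction"].to_numpy())

    mlflow.set_experiment("surrogate-evaluation")
    with mlflow.start_run(run_name="baseline-eval"):
        mlflow.log_metrics(metrics)  # metrics can then feed MLflow/Grafana dashboards
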
May 2014 - Jun 2019
5 years 2 months
United States

Software Engineer

RealPage, Inc

  • Developed ML pipelines for engineering data analysis using Python, Pandas, NumPy, and scikit-learn to validate surrogate models for mechanical behavior.
  • Integrated simulation outputs from ANSYS and ABAQUS into training datasets, using ParaView and Matplotlib for visualization and inspection of potential failure modes.
  • Built prototype neural surrogate models in Keras and TensorFlow, performed cross-validation, and exported models to ONNX for interoperability.
  • Implemented experiment tracking and model versioning with MLflow and used TensorBoard for training diagnostics and hyperparameter tuning.
  • Authored automated test suites with pytest and benchmarked inference performance across CPU/GPU using pytest-benchmark and Docker containers.
  • Collaborated with DevOps to deploy model evaluation workloads on AWS SageMaker and container registries; used Git and GitHub Actions for CI.
  • Performed statistical analyses and uncertainty quantification using SciPy and Monte Carlo sampling to quantify model confidence for engineering use cases (see the sketch after this list).
  • Used MATLAB and Simulink to generate labeled datasets and to verify ML model outputs against physics-based simulations.
  • Applied explainability tools (SHAP, LIME) to interpret surrogate predictions and guided feature engineering from CAD exports (SolidWorks).
  • Documented findings, created reproducible Jupyter notebooks, and delivered technical reports outlining model limitations and recommended remediation strategies.
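Below is a minimal sketch of the kind of Monte Carlo uncertainty-quantification step referenced above. The stand-in surrogate function, nominal load, and perturbation scale are illustrative assumptions, not the production implementation.

# Minimal sketch: Monte Carlo uncertainty quantification for a surrogate model.
# The stand-in surrogate, nominal load, and perturbation scale are illustrative only.
import numpy as np
from scipy import stats


def surrogate_deflection(load_n: np.ndarray) -> np.ndarray:
    """Stand-in surrogate mapping applied load (N) to predicted deflection (mm)."""
    return 0.002 * load_n + 0.15


def monte_carlo_ci(nominal_load: float, rel_std: float = 0.05,
                   n_samples: int = 10_000, seed: int = 0):
    """Propagate load uncertainty through the surrogate; return mean and 95% CI."""
    loads = stats.norm.rvs(loc=nominal_load, scale=rel_std * nominal_load,
                           size=n_samples, random_state=seed)
    preds = surrogate_deflection(loads)
    lo, hi = np.percentile(preds, [2.5, 97.5])
    return float(preds.mean()), float(lo), float(hi)


if __name__ == "__main__":
    mean, lo, hi = monte_carlo_ci(nominal_load=1500.0)
    print(f"predicted deflection: {mean:.3f} mm (95% CI {lo:.3f}-{hi:.3f} mm)")
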
Jul 2012 - Apr 2014
1 year 10 months
United States

Software Engineer

Foreflight

  • Contributed to engineering analytics and early ML model evaluation using Python, NumPy, Pandas, and scikit-learn to analyze flight and mechanical telemetry.
  • Processed time-series and sensor data using SciPy and custom signal processing pipelines, visualized results with Matplotlib and ParaView.
  • Built and validated prototype regression models in scikit-learn and Keras, assessing performance with cross-validation and statistical metrics (RMSE, MAE).
  • Worked with CAD-derived inputs from SolidWorks for feature extraction and prepared simulation-driven training sets from ANSYS exports.
  • Packaged evaluation scripts into Docker containers and maintained code in Git repositories integrated with CircleCI for automated tests.
  • Developed basic REST APIs using Flask to serve model predictions to internal tools and dashboards.
  • Implemented monitoring of model performance baselines and drift detection using custom scripts, with logging integrated into MongoDB (see the drift-check sketch after this list).
  • Performed sensitivity analysis and scenario testing using Monte Carlo methods and SciPy optimizers to validate model robustness.
  • Collaborated with cross-disciplinary teams to align ML model outputs with mechanical engineering acceptance criteria and produced comprehensive validation reports.
  • Maintained Jupyter notebooks and internal documentation demonstrating reproducible evaluation workflows and experiment artifacts tracked with MLflow.
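Below is a minimal sketch of the kind of baseline drift check referenced above, using SciPy's two-sample Kolmogorov-Smirnov test. The window sizes, p-value threshold, and synthetic data are illustrative assumptions only.

# Minimal sketch: prediction-drift check against a stored baseline window using a
# two-sample Kolmogorov-Smirnov test. Thresholds and data are illustrative only.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(baseline: np.ndarray, recent: np.ndarray,
                 p_threshold: float = 0.01) -> bool:
    """Flag drift when recent predictions differ significantly from the baseline."""
    _statistic, p_value = ks_2samp(baseline, recent)
    return bool(p_value < p_threshold)


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # historical predictions
    recent = rng.normal(loc=0.3, scale=1.0, size=500)     # latest batch
    print("drift detected:", detect_drift(baseline, recent))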

Summary

Seasoned engineering leader with 13+ years of experience driving AI-augmented mechanical engineering validation and model evaluation.

Proven track record delivering rigorous model assessments, reliability testing, and domain-aligned optimizations that reduced model error and accelerated deployment timelines.

Hands-on with industry toolchains for simulation, ML evaluation, and MLOps—bridging mechanical engineering and data science to produce production-ready AI systems.

Languages

English
Native

Education

University of Houston-Clear Lake

Bachelor of Science · Computer Science · Houston, United States

Certifications & licenses

AWS Certified: Machine Learning Foundations

Certified SAFe® 5 Agilist

Microsoft Certified: Azure Developer Associate
