Brandon B.

Mathematics Problem and Evaluation Consultant (LLM Fine-Tuning)

De Motte, United States

Experience

Mathematics Problem and Evaluation Consultant (LLM Fine-Tuning)

  • Designed and solved PhD-level math problems to test the limitations of large language models (LLMs), focusing on abstraction, proof construction, and symbolic reasoning.
  • Authored rigorous, step-by-step solutions with detailed annotations to evaluate multi-step problem-solving performance in LLMs.
  • Collaborated with AI researchers to develop curriculum-aligned benchmarks and identify reasoning gaps in advanced mathematical topics.
  • Contributed to model training pipelines by tagging errors, suggesting prompt refinements, and providing structured feedback.
  • Developed problem sets across number theory, algebra, combinatorics, and logic to stress-test model generalization and accuracy.

Research Scientist – Mathematics

Columbia University

  • Conducted interdisciplinary mathematical research bridging theory and applications in algebraic systems.
  • Published research on mathematical logic and its integration into educational assessment frameworks.
  • Engaged in collaborative work exploring how AI can model and apply theoretical constructs to practical reasoning tasks.

Mathematics Curriculum and Reasoning Consultant

Open EdTech Initiative

  • Collaborated on the development of a benchmark suite for evaluating AI performance across mathematics curricula from undergraduate to doctoral levels.
  • Authored challenging mathematical prompts with layered reasoning requirements in fields including set theory, discrete math, and real analysis.
  • Designed solution paths that emphasized clarity, conceptual integrity, and rigorous logical progression.
  • Provided annotations and tiered difficulty ratings to support adaptive AI model training and feedback loops.
  • Worked cross-functionally with engineers and researchers to align mathematical task types with research objectives in LLM behavior diagnostics.

AI Model Evaluation Specialist – Mathematics

TELUS International

  • Assessed AI-generated math content with a focus on logical consistency, conceptual clarity, and response completeness.
  • Created structured evaluation rubrics to measure reasoning depth and correctness.
  • Worked with AI developers to improve model interpretation of complex math prompts, aligning LLM behavior with expert-level expectations.

Mathematics Lecturer and Problem Designer

University of Cambridge

  • Taught undergraduate courses in abstract algebra, proof techniques, and mathematical logic.
  • Designed and graded original problems for coursework and research-aligned student assessments.
  • Supervised student-led research on advanced topics, training learners in rigorous problem-solving and independent exploration.

Summary

PhD-level mathematician with a strong foundation in abstract reasoning, symbolic manipulation, and advanced problem design. Over a decade of experience developing complex math problems, evaluating AI-generated content, and contributing to curriculum-aligned benchmarks. Proven ability to transform sophisticated mathematical concepts into clear, structured explanations. Adept at working independently in remote AI research environments while collaborating across teams to refine and assess large language models.

Languages

English · Elementary

Education

University of Cambridge

PhD · Mathematics · Cambridge, United Kingdom

Johns Hopkins University

MSc · Mathematics · Baltimore, United States · GPA 3.85/4.0

Certifications & licenses

AI in Educational Assessment

Advanced Mathematical Instruction and Curriculum Design

University of Cambridge
