Snehal S. Dikhale

Robotics Researcher @ Honda Research Institute

5 yrs industry · M.S. Robotics

The industry has mastered seeing. I want to give robots the ability to truly feel. That means cracking tactile representation and using it to unlock dexterous manipulation.

Right now I'm extending Vision-Language-Action architectures with touch, connecting foundation models all the way down to real hardware.

I'm equal parts researcher and engineer, and I genuinely love both; the blend is what I call hardware intuition. Always happy to connect, whether it's research, life, or just geeking out about robots.

Experience

Research Engineer II, Robotics

Current

Honda Research Institute  ·  Apr 2024 – Present

25% task success · 30% pose consistency · 3 patents filed
  • Architected a multimodal reasoning system by extending Vision-Language Model (VLM) backbones with custom tactile, force, and depth modality projection layers (a minimal projection sketch follows this role's highlights), enabling failure reasoning significantly faster than state-of-the-art baselines
  • Co-designed a multimodal self-attention action-conditioned model to generate context from vision, tactile, and proprioception, improving task success by 25%; mentored an intern through the full lifecycle of this module
  • Designed and developed a spatio-temporal graph-based representation learning framework combining video, depth, proprioception, and taxel-level tactile data, improving pose estimation temporal consistency by 30% in dynamic environments
  • Collaborated with cross-functional and international research teams to drive end-to-end project execution
  • One of 30 individuals selected across all of Honda North America for leadership development training
VLMs · Multimodal Reasoning · Spatio-Temporal GNNs · Intern Mentorship
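
To give a flavor of the modality-projection idea: a small adapter maps raw taxel readings into a handful of tokens the language backbone can attend to alongside vision and text. This is a minimal sketch only, not the production system; the taxel count, token count, and embedding width are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class TactileProjector(nn.Module):
    """Sketch: project a flat taxel array into the VLM's token embedding space."""
    def __init__(self, num_taxels: int = 256, hidden_dim: int = 512,
                 llm_embed_dim: int = 4096, num_tokens: int = 8):
        super().__init__()
        self.num_tokens = num_tokens
        self.llm_embed_dim = llm_embed_dim
        # Encode the taxel readings, then emit a small fixed number of
        # "tactile tokens" for the language backbone to attend over.
        self.mlp = nn.Sequential(
            nn.Linear(num_taxels, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_tokens * llm_embed_dim),
        )

    def forward(self, taxels: torch.Tensor) -> torch.Tensor:
        # taxels: (batch, num_taxels) -> (batch, num_tokens, llm_embed_dim)
        b = taxels.shape[0]
        return self.mlp(taxels).view(b, self.num_tokens, self.llm_embed_dim)
```

In a LLaVA-style setup, these tactile tokens would simply be concatenated with the vision and text tokens before the backbone's self-attention layers.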

Research Engineer I, Robotics

Honda Research Institute  ·  Sep 2020 – Apr 2024

65% sim-to-real gap closed · improved tactile resolution · 100+ robot experiments
  • Built an end-to-end perception pipeline for dexterous in-hand pose estimation under heavy occlusion, owning the full stack from algorithm design to deployment; validated robustness via 100+ real-robot experiments
  • Narrowed the sim-to-real gap by 65% through a 220k-sample domain-randomized visuotactile dataset (RGB-D + tactile) engineered in Unreal Engine
  • Designed hardware-agnostic CNN-based and graph representations for vision, depth, and 3D tactile sensor fusion, reducing position error by ~35% and angular error by ~64% over vision-only baselines
  • Contributed to contrastive learning techniques for taxel-based signals, achieving an improvement in tactile resolution that enhances fine-grained manipulation (see the contrastive-loss sketch below)
Sim-to-Real · Visuotactile Sensing · 6D Pose Estimation · Contrastive Learning
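
The contrastive work above builds on an InfoNCE-style objective. Below is a generic sketch, assuming each taxel embedding is paired with exactly one positive target embedding per batch element; it shows the standard loss shape, not the exact HyperTaxel objective or encoders.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(taxel_emb: torch.Tensor, target_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over a batch: each taxel embedding is pulled toward its own
    paired target (assumed here to be, e.g., a higher-resolution surface
    representation) and pushed away from every other target in the batch."""
    # Normalize so the dot product is cosine similarity.
    taxel_emb = F.normalize(taxel_emb, dim=-1)    # (B, D)
    target_emb = F.normalize(target_emb, dim=-1)  # (B, D)
    logits = taxel_emb @ target_emb.t() / temperature  # (B, B) similarity matrix
    # The i-th taxel embedding's positive is the i-th target (diagonal).
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```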

Graduate Researcher

Worcester Polytechnic Institute  ·  2018 – 2020

78% grasp success · 3.8 GPA · 4 projects
  • M.S. Thesis: Built a simulation and benchmarking framework (Gazebo, MoveIt, Panda Arm, RealSense) to evaluate deep learning grasping algorithms; achieved 78% success with GQCNN and 65% with GPD on RGB-D data
  • Human-Robot Handover: Designed interaction experiments using ROS/Python; trained ProMPs to predict Object Transfer Point (RMSE < 0.2m)
  • Simulation of Control Techniques: Implemented Robust, PD, and PD+Gravity controllers for the Baxter arm; evaluated via MATLAB simulations (the PD+gravity law is sketched after this list)
  • Predicting Building Energy Consumption: Applied regression, random forest, and neural networks; achieved RMSE 1.27, ranked top 30% on Kaggle
  • WIN Women's Young Investigator Fellowship – Awarded to the top 4 graduate female researchers in STEM at WPI
  • Mentor, Women's Research and Mentorship Program – Mentored 1 undergraduate and 2 high school students; led robotics and 3D printing workshops
ROS · Gazebo · MoveIt · Deep Learning for Grasping
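
The PD-plus-gravity law from the control project is compact enough to sketch. The gains and gravity model below are hypothetical placeholders; on real hardware the gravity term comes from the robot's dynamics model rather than a stub.

```python
import numpy as np

def pd_gravity_control(q, dq, q_des, Kp, Kd, gravity_fn):
    """PD control with gravity compensation:
        tau = Kp (q_des - q) - Kd dq + g(q)
    With positive-definite Kp and Kd, this law provably stabilizes
    set-point regulation for a rigid manipulator."""
    return Kp @ (q_des - q) - Kd @ dq + gravity_fn(q)

# Usage with hypothetical 7-DoF gains (Baxter has 7 joints per arm);
# the zero gravity stub stands in for a real dynamics-model call.
q, dq = np.zeros(7), np.zeros(7)
q_des = np.full(7, 0.3)
Kp, Kd = np.diag([50.0] * 7), np.diag([5.0] * 7)
tau = pd_gravity_control(q, dq, q_des, Kp, Kd, lambda q: np.zeros(7))
```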

Selected Patents & Publications

Vision-Language Models for Failure Reasoning in Multi-Fingered Dexterous Manipulation

S. Zhao*, S. Dikhale, et al.

* Mentored Intern

US Provisional Patent, 2025

Context-Aware Multimodal Action Planning Using Tactile, Vision, and Language

A. Shahidzadeh*, S. Dikhale, et al.

* Mentored Intern

US Provisional Patent, 2025

DynastGNN: Dynamic Spatio-Temporal Hierarchical Graph Neural Network for Visuotactile 6D Pose Estimation of an In-Hand Object

Snehal Dikhale, et al.

Paper in Progress, 2025

HyperTaxel: Hyper-Resolution for Taxel-Based Tactile Signal Through Contrastive Learning

Hongyu Li, Snehal Dikhale, Jinda Cui, Soshi Iba, Nawid Jamali

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024

Hierarchical Graph Neural Networks for Proprioceptive 6D Pose Estimation of In-hand Objects

Alireza Rezazadeh, Snehal Dikhale, Soshi Iba, Nawid Jamali

IEEE International Conference on Robotics and Automation (ICRA), 2023

ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion

Hongyu Li, Snehal Dikhale, Soshi Iba, Nawid Jamali

IEEE Robotics and Automation Letters (presented at ICRA 2024), 2023

VisuoTactile 6D Pose Estimation of an In-Hand Object Using Vision and Tactile Sensor Data

Snehal Dikhale, Nawid Jamali

IEEE Robotics and Automation Letters (presented at ICRA 2022), 2022

Technical Skills

Research & AI

Embodied AI · Vision-Language Models (VLMs) · Large Language Models (LLMs) · Multimodal Learning · Dexterous Manipulation · Sim-to-Real Transfer · 3D Tactile Representation · Graph Neural Networks · Transformers · Contrastive Learning · Computer Vision · Deep Learning

Software & Frameworks

Python · C++ · PyTorch · Hugging Face · CUDA · Docker · Git

Simulation & Tools

ROS / ROS2 · MuJoCo · Isaac Sim · Unreal Engine · Gazebo · MoveIt · Blender

My Story

Robotics Chose Me, But I Choose It Every Day

A personal essay on what it really means to build a career in robotics — the doubt, the obsession, the hardware that breaks at 11pm, and why I'd choose it all over again.

Read on Medium