RADAR-SIGN-BRIDGE
Advancing British Sign Language interpretation and inclusion through non-intrusive radar technology.
Can smart devices respond to British Sign Language, not only spoken commands?
Overview
A more inclusive route to smart-device interaction.
Many smart technologies are designed around voice: "turn on the light", "set a timer", "call someone". But voice-first systems do not naturally support people who communicate using British Sign Language (BSL).
RADAR-SIGN-BRIDGE is an EPSRC-funded research project at the University of Southampton developing privacy-preserving radar technology so future smart devices could respond to BSL, not only spoken commands.
We are exploring how radar can sense signing movement patterns without capturing conventional video, helping us investigate a more private, accessible, and inclusive route for future BSL-based smart-device interaction.
Motivation
Why this project matters
Digital technology is becoming part of everyday life: in homes, workplaces, healthcare, education, public services, and care settings. But many systems are still designed mainly for spoken language.
This creates a digital divide for BSL users.
Smart speakers, voice assistants, and other connected devices often assume that people interact through speech. Camera-based sign language recognition may offer one route, but cameras can raise concerns around privacy, consent, lighting conditions, and continuous video recording.
RADAR-SIGN-BRIDGE investigates a different approach: using radar to support sign-based interaction without relying on cameras.
Our aim is not to replace human interpreters or the richness of Deaf communication. Instead, we are exploring how future technology could become more inclusive by responding to BSL-based commands in everyday contexts.
Our vision
Privacy-preserving BSL interaction with smart devices
We are working towards future systems where a person could use BSL-based commands to interact with smart devices, such as:
- Turning lights on or off
- Adjusting heating
- Setting reminders or timers
- Answering a doorbell alert
- Sending a message
- Making a call
- Controlling music or TV
- Asking for help in an emergency
These examples are not final. We want Deaf and BSL communities to help shape what should be prioritised first.
Approach
What makes radar different?
Radar senses movement patterns using radio waves. Unlike a camera, it does not capture a visual image or video of a person.
For sign language research, radar may help represent how hand, arm, and body movements change over time. The challenge is to connect these movement patterns with BSL signs, phrases, meanings, and eventually smart-device commands.
This makes radar a promising research direction for more private sign-based interaction, especially in homes or other settings where cameras may feel intrusive.
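To make "movement patterns over time" concrete: radar research commonly summarises motion as a micro-Doppler spectrogram, a time-frequency map showing how fast parts of the body move towards or away from the sensor. The Python sketch below is a minimal illustration of that idea; the radar parameters, the simulated gesture, and the use of NumPy/SciPy are assumptions for illustration, not a description of this project's hardware or software.

# A minimal sketch of how radar motion is often summarised: a
# micro-Doppler spectrogram showing how Doppler frequency (speed
# towards or away from the sensor) changes over time.
# We simulate an idealised moving reflector rather than use real
# hardware; all parameters below are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 1_000                      # slow-time sample rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)     # 4 seconds of observation
wavelength = 0.0125             # roughly a 24 GHz radar, assumed

# Simulated hand motion: a to-and-fro gesture, 1.5 cycles per second.
radial_velocity = 0.8 * np.sin(2 * np.pi * 1.5 * t)            # m/s
displacement = np.cumsum(radial_velocity) / fs                 # metres
phase = 4 * np.pi * displacement / wavelength                  # two-way path
iq = np.exp(1j * phase)         # idealised complex radar return

# Short-time Fourier transform -> time-frequency (micro-Doppler) map.
f, frames, Z = stft(iq, fs=fs, nperseg=128, noverlap=96,
                    return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(spectrogram_db.shape)     # (frequency bins, time frames)

Nothing in this representation resembles a photograph: only motion relative to the sensor is captured, which is the basis of the privacy argument above.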
Workstreams
What we are developing
RADAR-SIGN-BRIDGE is developing tools, datasets, and models for radar-based BSL recognition and translation. The project includes work on:
1. Radar sign language simulation
We are developing LinguaRadar, a sign language radar simulation approach for generating radar representations of BSL signing motion.
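LinguaRadar itself is not detailed here, but the general family of techniques it belongs to can be sketched. A common approach is a point-scatterer model: each tracked body joint is treated as a small reflector whose distance from the radar drives the phase of an idealised return. The Python sketch below illustrates that generic idea under assumed parameters; it is not the LinguaRadar implementation.

# A hedged sketch of point-scatterer radar simulation: each tracked
# body joint (e.g. from motion capture or pose estimation) is treated
# as a small reflector, and its range from the radar drives the phase
# of an idealised continuous-wave return. Generic illustration only.
import numpy as np

def simulate_cw_return(joints_xyz, radar_pos, wavelength=0.0125):
    """joints_xyz: array (frames, joints, 3) of joint positions in metres.
    Returns one complex radar sample per motion frame."""
    ranges = np.linalg.norm(joints_xyz - radar_pos, axis=-1)  # (frames, joints)
    amplitude = 1.0 / np.maximum(ranges, 0.1) ** 2            # falls with range
    phase = 4 * np.pi * ranges / wavelength                   # two-way path
    return (amplitude * np.exp(-1j * phase)).sum(axis=1)      # (frames,)

# Hypothetical example: 2 seconds of a single "hand" joint at 50 fps.
fps = 50
t = np.arange(0, 2, 1 / fps)
hand = np.stack([0.3 * np.sin(2 * np.pi * t),     # x: side-to-side wave
                 np.full_like(t, 1.5),            # y: 1.5 m from radar
                 np.full_like(t, 1.0)], axis=-1)  # z: hand height
signal = simulate_cw_return(hand[:, None, :], radar_pos=np.zeros(3))
print(signal.shape)                               # (100,)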
2. Synthetic radar datasets
A major research challenge is the lack of suitable radar datasets for BSL: no radar database of BSL signing is publicly available. The project therefore explores how existing BSL video and motion resources could support the creation of synthetic radar datasets.
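As one hypothetical illustration of how such a pipeline might look, the sketch below converts pose keypoint files (assumed to have already been extracted from existing BSL video with an off-the-shelf pose estimator) into labelled synthetic radar signals. The file layout, gloss labels, and the placeholder simulator are all assumptions, not the project's actual pipeline.

# A hedged sketch of a dataset-generation loop: pose keypoint files
# are converted to synthetic radar signals and stored alongside their
# gloss labels. `simulate_radar` stands in for a fuller simulator such
# as the point-scatterer sketch above; paths, file layout, and label
# names are hypothetical.
from pathlib import Path
import numpy as np

def simulate_radar(joints_xyz):
    """Placeholder simulator: (frames, joints, 3) -> complex signal."""
    ranges = np.linalg.norm(joints_xyz, axis=-1)
    return np.exp(-1j * 4 * np.pi * ranges / 0.0125).sum(axis=1)

pose_dir, out_dir = Path("poses"), Path("radar_dataset")  # assumed layout
out_dir.mkdir(exist_ok=True)
for pose_file in sorted(pose_dir.glob("*.npz")):
    data = np.load(pose_file)
    joints = data["joints"]            # (frames, joints, 3), assumed key
    gloss = str(data["gloss"])         # e.g. "LIGHT-ON", assumed key
    signal = simulate_radar(joints)
    np.savez(out_dir / pose_file.name, signal=signal, gloss=gloss)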
3. Radar-to-language modelling
Radar produces time-changing movement signals, but BSL is a full language with grammar, context, regional variation, facial expressions, and body movement. We investigate how radar signals could be connected to signs, glosses, phrases, and English commands.
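One widely used pattern for this kind of problem, mapping an unsegmented time-varying signal to a sequence of discrete labels, is a neural encoder trained with CTC (connectionist temporal classification) loss. The PyTorch sketch below shows that generic pattern with an assumed gloss vocabulary, architecture, and input shapes; it illustrates the technique, not the project's model.

# A hedged sketch of radar-to-gloss sequence modelling: a small GRU
# encoder over spectrogram frames trained with CTC loss, which maps
# an unsegmented signal to a gloss sequence without frame-level
# alignment. Vocabulary, shapes, and architecture are assumptions.
import torch
import torch.nn as nn

GLOSSES = ["<blank>", "LIGHT", "ON", "OFF", "WARMER", "HELP"]  # assumed

class RadarToGloss(nn.Module):
    def __init__(self, n_freq_bins=128, hidden=256, n_glosses=len(GLOSSES)):
        super().__init__()
        self.encoder = nn.GRU(n_freq_bins, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_glosses)

    def forward(self, spectrograms):         # (batch, time, freq_bins)
        features, _ = self.encoder(spectrograms)
        return self.head(features)           # (batch, time, n_glosses)

model = RadarToGloss()
ctc = nn.CTCLoss(blank=0)
x = torch.randn(2, 200, 128)                 # two fake spectrograms
log_probs = model(x).log_softmax(-1).transpose(0, 1)  # (time, batch, C)
targets = torch.tensor([1, 2, 1, 3])         # "LIGHT ON", "LIGHT OFF"
loss = ctc(log_probs, targets,
           input_lengths=torch.tensor([200, 200]),
           target_lengths=torch.tensor([2, 2]))
loss.backward()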
4. Smart-device interaction
The practical focus is to explore how future smart devices could respond to BSL-based commands, reducing reliance on spoken interfaces.
Open questions
Research questions we are exploring
We are especially interested in questions such as:
- Which BSL signs or phrases should be prioritised first?
- Should early systems focus on individual signs, short phrases, or continuous signing?
- Should radar signals first map to BSL glosses before English?
- How should we handle sign transitions and natural signing?
- Which commands would be most useful for smart-device control?
- When should a smart device ask for confirmation before acting?
- What would make this technology trustworthy for Deaf and BSL users?
- Would radar feel more acceptable than camera-based recognition?
These questions are central to the project's current direction. The project update materials describe key research challenges around data scarcity, language modelling, gloss recognition, continuous signing, and radar-to-language evaluation.
With the community
Community feedback is essential
We want this research to be shaped with Deaf and BSL communities, not only for them.
Your feedback can help guide decisions about:
- which BSL commands should be included first
- which smart-device tasks matter most
- whether radar feels more acceptable than cameras
- what privacy and consent safeguards are needed
- how the system should behave when it is unsure
- what cultural and regional BSL considerations must be respected
- who should be involved in evaluating future prototypes
The project's ethics and responsible research approach emphasises open communication with stakeholders, especially Deaf communities, and incorporating their feedback into the project.
Share your views
Current question
Help shape the first BSL smart-device commands
We are currently asking the community: which BSL commands would be most useful? Examples might include:
Lights
- on
- off
- brighter
- dimmer
Heating
- warmer
- colder
Media
- play
- pause
- volume up
- volume down
Communication
- call someone
- send a message
Daily support
- timer
- reminder
- calendar
- weather
Safety
- help
- emergency
- alarm
- doorbell or visitor alert
Care settings
- medication reminder
- support request
These are only starting points. We want to know what would be genuinely useful in everyday life.
Behaviour
How should the system behave?
For smart-device control, recognition is only one part of the problem. The system must also behave safely and respectfully. We want your views on questions such as:
- Should the system act immediately?
- Should it ask for confirmation first?
- Should it show what it understood before acting?
- Should users be able to correct it easily?
- Which actions should always require confirmation?
- How should the system respond if it is unsure?
This is especially important for actions such as emergency alerts, unlocking doors, sending messages, or calling someone.
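One way to make these trade-offs concrete is a simple policy that weighs the recogniser's confidence against the risk of the action. The Python sketch below is purely illustrative: the thresholds, command names, and risk categories are assumptions we would want the community to help set, not decisions the project has made.

# A hedged sketch of a confirmation policy: recognised commands are
# acted on, confirmed, or shown back to the user depending on how
# risky the action is and how confident the recogniser was.
from dataclasses import dataclass

HIGH_RISK = {"EMERGENCY", "UNLOCK-DOOR", "SEND-MESSAGE", "CALL"}  # assumed

@dataclass
class Recognition:
    command: str        # e.g. "LIGHT-ON"
    confidence: float   # recogniser score in [0, 1]

def decide(rec: Recognition, confirm_above=0.6, act_above=0.9) -> str:
    if rec.confidence < confirm_above:
        return "show-what-was-understood"   # let the user correct it
    if rec.command in HIGH_RISK:
        return "ask-for-confirmation"       # always confirm risky actions
    if rec.confidence < act_above:
        return "ask-for-confirmation"
    return "act"

print(decide(Recognition("LIGHT-ON", 0.95)))  # act
print(decide(Recognition("CALL", 0.95)))      # ask-for-confirmation
print(decide(Recognition("LIGHT-ON", 0.4)))   # show-what-was-understood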
Get involved
Who can get involved?
We welcome input from Deaf and BSL communities and from anyone else with an interest in the project.
You can get involved by completing the feedback form, joining a future workshop, taking part in prototype feedback, or contacting the team about collaboration.
Team
Project team
Dr Shelly Vishwakarma
Principal Investigator
Lecturer, Department of Electronics and Computer Science, University of Southampton
s.vishwakarma@soton.ac.uk
Dr Heba A. Awad
Research Fellow
Dr Kainat Yasmeen
Research Fellow
Keniel Peart
PhD Researcher
Contact
Get in touch
For enquiries, collaboration, or community engagement.
Dr Shelly Vishwakarma
Lecturer, Department of Electronics and Computer Science
University of Southampton