Artificial Intelligence, British Sign Language and the British Deaf Association

Discussion Paper

FOREWORD

Artificial intelligence, the world tells us, is the future. News media, government Ministers, private companies – all are enthralled by the prospect of change.

Deaf British and Irish Sign Language (BSL/ISL) signers have been identified as people who would benefit from this trend, yet there has been little effective engagement with them across the field. The British Deaf Association, as the UK’s recognised representative body for BSL and ISL, is determined to bring the voice of our members to this important subject. Our mission is to ensure that our language is respected and fully protected from harm, and that includes working meaningfully with us on all aspects of our language.

It’s clear that AI BSL might bring many substantial benefits to the deaf community. For example, a massive increase in translation of government (and private company) websites could be a tremendous step forward for accessibility. Deaf people are too often excluded by society’s unwillingness to translate vital information into our language, and AI BSL could lead to rapid change.

But there are serious risks, too. AI BSL is not a wonder-technology, despite the enthusiasm of people with vested interests. Neither is the task of developing the technology as simple and straightforward as its adherents sometimes make out. How, for example, will AI BSL understand the different signs used in regional dialects around the UK? Will the signing of black or ethnic minority signers be studied as deeply as that of white signers and form an appropriate part of the data that feeds the AI system? Will the quality of machine interpretation be reliable enough to use, for example in courtrooms or in medical appointments? These concerns do not even scratch the surface of what needs to be fully explored.

AI BSL appears to be coming, like it or not. We recognise that it cannot be stopped. But a successful development of AI BSL requires the deep involvement of the deaf signing community. These are important issues and – as government Minister Sir Stephen Timms MP told the House of Commons on March 20, 2025 – “deaf people need to be in the driving seat in resolving them.”

We believe that the Government needs to work with partners to support the establishment of a framework to set clear expectations on quality and safety in relation to AI BSL. This paper aims to start a discussion on what this framework will include, and how we can ensure that it is delivered to protect the interests of UK citizens and institutions. We hope that other voices will join in this discussion, so that multiple experiences and perspectives can lead to a truly informed approach.

For our part, we will do all we can to encourage deaf signers to make their voices clear. We are meant to be the primary beneficiaries of AI BSL, a unique position, so we expect to be heard.

Rebecca Mansell

Chief Executive, British Deaf Association

OUR APPROACH

  1. BDA uniquely exists to protect and promote the interests of the UK’s deaf BSL (and, in Northern Ireland, ISL) signers.
  2. BDA is, in principle, neither for nor against the development and use of AI BSL. We are independent, objective, and open-minded about the part it can, where appropriate, play in society. We have no vested interest in any aspect of AI technology and will therefore approach the issue with honest independence.
  3. Deaf people should have leading professional roles in the development, design, delivery, deployment, and evaluation of AI BSL: we wish to see these advance securely and fairly.
  4. Deaf people are expected to be consumers of AI BSL output: where this is provided, we wish to ensure that it is of high quality, cost-effective, reliable and socially responsible.
  5. Protecting and promoting signers’ interests entails protecting and promoting the language itself, BSL. We must guard against the development and deployment of misaligned AI BSL that diverts resources from the priorities of deaf signers, damages the fundamental size, strength or status of our linguistic community, or damages the rich and diverse fabric of our communal linguistic asset.
  6. Signers’ interests are best protected when their engagement with the wider society is mutually satisfactory. We are therefore concerned to ensure that the use of AI BSL is effective for non-signers, too.
  7. As a matter of principle, we expect all stakeholders in this field to recognise and commit to empowering deaf BSL signers as primary, non-tokenistic decision-makers in all matters relating to our language and community. This goes to the very heart of our identity.
  8. BDA calls for the development and adoption of a strong framework to cover quality and safety issues to enable the safe and appropriate use of AI with our language for deaf people.

THE BIGGER PICTURE

We note with approval the existence of high-level frameworks to guide safe and ethical decision-making in the rapidly evolving field of general AI development and practical application, including the Council of Europe’s Framework Convention on AI, Human Rights, Democracy and the Rule of Law, which was signed by the UK in September 2024. This is a positive and tested starting point for discussions on a framework for AI BSL.

  1. The UK has a set of guiding principles to ensure that AI systems are built and used responsibly:
    1. Fairness: AI should treat everyone equally and avoid bias against any group.
    2. Transparency: AI systems should be clear and easy to understand. People should know what data is collected, how it’s used, and who has access to it.
    3. Privacy: Personal information must be protected, and AI systems should respect people’s privacy.
    4. Accountability: Developers and organisations using AI should be held responsible for its decisions and impacts.
    5. Contestability and redress: If an AI system makes a harmful decision, people should have a clear way to challenge it and get it corrected.
  2. We value the work done within the European Union on these issues, noting sources such as https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment which includes a detailed list of questions for AI proponents.

The EU list reflects the following commendable underlying principles:

  1. AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes.
  2. Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines.
  3. Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be understood and traceable by human beings. In other words, operators should be able to explain the decisions their AI systems make.
  4. AI systems should be sustainable (i.e., they should be ecologically responsible) and should enhance positive social change.
  5. AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.
  6. AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable.
  7. Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen.


  3. The above AI principles are just the starting point. Developers are expected to turn these ideas into action by doing things such as:
    1. Risk assessments: Checking for potential harms to users and finding ways to reduce them.
    2. Bias audits: Looking for unfairness in data or algorithms and fixing it.
    3. Transparency reports: Providing clear explanations of how AI systems work and what data they rely on.
    4. Stakeholder engagement: Consulting with the people affected by AI – including deaf communities – to make sure the system meets their needs.

BDA's experience, however, is that in the development of projects, services, and technologies, such principles and processes frequently fail deaf signers. Developers rarely involve deaf signers appropriately, if at all, in their work, and rarely understand the complexity of BSL.

Without working closely with deaf-led, BSL-led and deaf representative organisations to ensure deaf leadership in their design, management, delivery, and evaluation, there is a significant risk that AI BSL services and systems will: be developed or deployed in an unsafe way; exclude or misrepresent BSL signers; or fail to achieve effective or efficient use of resources.

WHAT A FRAMEWORK MUST CONSIDER

Why do we need stronger scrutiny of AI for BSL users?

  1. Quality and safety: in healthcare, legal, or emergency settings, errors in AI-generated BSL could lead to life-threatening misunderstandings.
  2. Risk of bias: AI systems may standardise BSL, wiping out regional variation and excluding older, younger or deafblind signers.
  3. Data safety and consent: training AI systems relies on video recordings of signers. Using this data without proper consent is a major privacy concern. Already, we are aware of videos of trained interpreters being used, even where BSL may not be their first language.

  4. Devaluing human expertise: AI BSL could reduce demand for qualified human translators and interpreters, limiting career opportunities for deaf professionals.


Steps to ensure safe, ethical and inclusive AI BSL development could, in theory, include:

  1. Collecting data ethically: Videos of BSL should only be used with the full, informed consent of the people involved. BSL signers must reflect different ages, regions, and backgrounds. How can consent and language richness and diversity be safeguarded?

  2. Building partnerships: Developers should work closely with BSL-led and representative organisations from the start, including joint design workshops and regular progress checks. How can a trustworthy framework be developed to ensure that this happens?

  3. Setting up safety and safeguarding: A deaf-led, BSL-led framework for safety could create transparency, ensure quality, and establish a clear and simple process, structured for reliability, effectiveness and responsiveness, to support AI BSL services in being accurate, fair, and respectful of BSL signers. How would this be developed? What support would BDA and other BSL-led bodies need to make this happen?

  4. Training developers: Non-signing developers will need support and training to follow the lead of deaf professionals to ensure the safety and effectiveness of BSL AI services. They will need to understand how BSL works to be able to deliver a safe, effective and value-for-money service. How can this safeguarding be ensured? How do we encourage deaf signers to enter and work in AI BSL, and support their development and career progress? What are the feasible pathways to develop the careers and capacity of deaf signing professionals to take on senior management roles in AI BSL? What would be an appropriate timeframe to aim at?  What ways of tracking this progress are there?

  5. Joint policy development: Deaf-led and BSL-led organisations should work together to develop AI frameworks on a collaborative basis – with the input of external advisors – to create a unified framework for AI BSL safety, quality, value for money for public funds, and alignment with the needs of BSL signers. This framework should be able to support emergent innovations by and for BSL signers with a diversity of language needs in their use of AI systems. Technologies change and cultures evolve over time. Openness and responsivity in the policy framework is highly important to maintain continued alignment and effectiveness. BDA’s experience is that this can only be delivered through significant leadership by deaf-led and BSL-led organisations. What support would be needed to deliver this collaboration and achieve a clear, effective and unified framework?

CONTEXTS OF USE: TRANSLATION AND INTERPRETATION

At present, there appear to be two main functions envisaged for AI BSL:

  1. translation of pre-recorded material and
  2. (simultaneous) interpreting of real-time interaction.

We would stress that importantly different issues are associated with each: in particular, pre-recording allows, at least in theory, for quality checks in a way that live interpretation does not.

  • TRANSLATION
    • English-to-BSL translation has been made available to the public. In narrowly restricted domains (eg train station displays), it appears to work effectively.
    • The practices and systems underlying this form of translation are now sufficiently well established in the industry that it appears to be a matter of only a few years before a. the ‘rough edges’ of early quality are reduced and b. the range of available domains increases to an effectively unlimited degree.
    • BSL-to-English translation is not yet available to the public. The speed at which the underpinning technology is emerging across the global industry, however, strongly indicates that such formats will appear in due course.


  • TRANSLATION: QUESTIONS THAT MUST BE ANSWERED
    • Does the existence of AI BSL systems mean that their use is always appropriate? If not, when is an AI translation inappropriate? Conversely, are there contexts when AI is especially suitable? Who decides what is appropriate?
    • At a time (post-BSL Act 2022) when more and more services are improving their accessibility in BSL, will the spread of AI-generated BSL output devalue the skills and employability of deaf translators (even as it increases the overall workflow)?
    • Who is planning what in the use of AI BSL, and how are decisions about priorities being made? The AI BSL company Signapse, for example, states that it has plans “across various sectors, including education, healthcare information, customer service, and entertainment”. That is a lot of sectors. It means that deaf people would essentially be encountering AI-BSL all day every day. What is the planned programme of developments and how is it determined?
    • The primary focus placed on English-to-BSL translation (and interpreting, as noted below) reflects the prevalent and yet simplistic attitude from non-signers that deaf people ‘need’ to have access to messages from the English-speaking majority. This is a political/ideological choice, not merely a natural state of affairs. What are its consequences and (how) do they need to be challenged?

  • INTERPRETING
    • Real-time interpreting between BSL and English is not yet available (in either direction). It is expected to evolve rapidly as the underlying principles can be adapted from translation systems, linguistic data-sets expand, and the efficient use of computational power continues to increase.
    • As with translation, it is expected that English-to-BSL interpreting will appear first – for example, simultaneous AI-generated BSL output might accompany spontaneous, unscripted material such as live interviews on television that are conducted entirely in English and contain no BSL in the original source material.
    • Two-way interpreting (ie between BSL-signing and English-speaking participants) is expected to be the last kind of output to become functionally available. The technical challenges are significant but not theoretically impossible to overcome. As Signapse puts it: “recent developments in AI include systems that can, in the future, recognise and interpret both speech and sign language gestures to enable real-time communication in public spaces, transport hubs, and customer service settings.”


  • INTERPRETING: QUESTIONS THAT MUST BE ANSWERED
    • Does the existence of AI BSL systems mean that their use will always be appropriate? If not, when is an AI interpretation inappropriate? Conversely, are there contexts when AI is especially suitable? For example, a deaf discussant on this issue recently noted that AI approaches raise the prospect of protecting client confidentiality in new ways. Who decides what is appropriate?
    • It is a core element of interpreter training that the interpreter is an active third participant in interaction who, when necessary, can and must both seek and provide clarification. As another deaf discussant noted, “If you ask the AI interpreter questions, can they reply?”

    • Human interpreters are trained to respond sensitively to power imbalances, emotional implications and other complex nuances in interaction. Can this ever be matched by AI systems? A deaf contributor to discussions at the BDA AGM set the issues out most eloquently:


“Human interpreters understand cultural nuances and context within BSL and deaf culture, which AI may not fully grasp. They can adjust their signing to reflect the appropriate tone, register and emotional context of a conversation. Furthermore, human interpreters build trust and rapport with the deaf client, offering emotional support and reassurance in sensitive situations. They can adapt their signing to adjust to emergent issues (eg clarifications) and to suit the individual preferences of the deaf client (eg speed or specific regional dialects). AI solutions may be generic or overly formal, which may not meet the user’s need… Human interpreters, in my opinion, remain essential for their ability to provide culturally competent, empathetic and accurate communication in diverse and complex situations.”

CONTEXTS OF USE: RESEARCH, EXPLOITATION & COMMERCIALISATION

In the early days of the AI revolution, it seemed likely that deaf people would be less affected by these technologies because there was little money to be made from small signing communities. This picture has changed for various reasons, including the decrease in the cost of processing ‘big data’. However, if BSL is considered profitable, are members of its core community benefitting?

  • Large Language Models (LLMs) of the kind that AI language systems require have to get their source data from somewhere. What data underpins AI BSL systems? How has it been procured (eg internet scraping)? Whose permission has been or should be sought for this? How, if at all, are the originators of this core material remunerated for their original content (without which the system could not function)? This ethical problem is common to all language-content producers, but small, minoritised linguistic communities are particularly vulnerable without robust representation and protection.
  • Where signing AI systems produce output in the likeness of a real person (eg a deaf translator), their Name, Image & Likeness (NIL) rights must be protected. How can we be assured that this protection is and will always be in place?
  • BDA is approached frequently by tech companies seeking to profit from a language and a community of which they have little or no understanding. Often, they are applying for Government funding which too frequently appears to get granted behind closed doors without any reference to deaf input. So who should be making AI BSL products? How are funding decisions being made on this? Where is the deaf voice?
  • As the profitability of AI BSL technologies increases, questions need to be highlighted:
    • Where will the profits go? – If not back into the language community which provides the raw linguistic materials, by what mechanism could this be addressed? Are costs and benefits of the existing solutions – ie services delivered by humans for humans – being fairly and carefully calculated and evaluated in comparison with AI products?
    • It is easy to get caught up in the ‘white heat of technology’ excitement and to make funding available for unsafe ‘scientific’ solutions for minorities (backed by the eloquent marketing claims of global corporations) which have not been reliably shown to improve upon existing tried-and-tested services provided by safe, closely-regulated human agents.
  • AI is rapidly underpinning everyday life in a multitude of spheres, is getting increasingly powerful, and is perceived to be increasingly cheap. Whilst the BSL world is starting to become aware of its current and forthcoming effects on translation and interpreting, how can we monitor what else may be on the horizon, and ensure that the above issues are addressed from the outset?
  • Questions arising therefore include:
    • How can we ensure mutual benefit, optimal impact, effectiveness, and safety for deaf communities, especially when financial priorities might influence developers’ decisions?
    • How can we make working with deaf organisations a requirement for AI developers while ensuring accessibility needs are met?
    • What strategies could encourage developers to prioritise accessibility to careers in the industry for deaf professionals?
    • How can we ensure that deaf signers and our representative organisations (including BDA) lead the discussion when shaping future AI BSL governance strategies?

STANDARDS: PROTECTING BSL SIGNERS

The human professions of BSL/English translation and interpreting took many years to become established and are subject, by hard-earned consensus, to frameworks of safety, ethics, standards and practice in place since the 1980s. These protect the interests of all concerned and minimise harm and malpractice.

  • Who will develop, monitor, and update strong frameworks to shape AI BSL services in terms of a. standards and b. safety? How will these quality frameworks be established and monitored, and safe use of AI BSL enforced? What does appropriate practice look like?
  • Who is liable for AI-generated mistranslations (an issue familiar from other contexts such as debates about driverless cars)? There is a strong perspective insisting that we will always need humans in the loop. The counter-argument says this is expensive, will become increasingly unnecessary as the AI learns, and may well be impracticable anyway. How can consensus be reached and enacted on this before real harm is done?
  • It should be noted that any form of checking of AI output is particularly important and particularly difficult in interpreting contexts (where any human intervention would have to happen in real time).
  • We underline here the need for transparency of decision-making processes, a principle that is of paramount importance in human translation and interpreting professions: a worthy practitioner must be able to justify their work. As the EU framework we cited in section 2 above says, decisions made by the software should be understood and traceable by human beings, ie operators should be able to explain the decisions their AI systems make. Is this happening? How will it be robustly assured when AI output is being produced in huge volumes?
  • We wish to be assured in particular that the largest purchasers of BSL translating and interpreting – currently Government departments – are spending money wisely, including on health, justice and employment (eg the Access to Work scheme). Are they systematically receiving good advice? A significant and structural risk exists that they will be mis-sold AI solutions that sound cheap and efficient but create problems and costs in the long term. How can this risk be reduced? How will government want to use AI BSL, and to what effect (on the professions, institutions and the community as a whole)?

NORMS: PROTECTING BSL

BDA understands that in a legal sense communities do not hold full ownership of their languages. Yet it is clear that deaf people (and their representative organisations) have a moral authority in relation to the linguistic forms of BSL. Our language constitutes the primary connective tissue that gave rise to our community in the first place (it is through BSL that deaf people have always come together as a population) and that continues to hold us together over the centuries. Along with this authority goes our responsibility to safeguard our language for generations to come: deaf people like us who are biologically shaped to use a visual-gestural language which has naturally evolved to suit our specific personhood.

  • In this context, we are concerned about the potential effect of AI on the rich linguistic diversity of which members of the BSL community are proud, a significant element in shaping our identities. This includes regional variation; dialects and language patterns associated with different age-groups; the potential for race and ethnic bias in the samples upon which AI systems are trained; under-representation or mishandling of variation associated with LGBTQI++ populations; and all other forms of social variation that form the dynamic tapestry of the national BSL community as a whole. What input will this full range of people have into AI systems? What effect will AI BSL have upon their proudly distinctive and communicatively significant linguistic identities?
  • It is baked into the structure of LLMs to prefer popular, frequently-occurring linguistic patterns, and the data with which they are most familiar comes from English. We note sources such as “Empirical evidence of Large Language Model’s influence on human spoken communication” and “AI and the Death of Human Languages” (Lux Capital), which says:

“AI researchers show that models implicitly “think in English,” even when not specifically architected to do so… The underlying language trend is overwhelming. Human language offers the ultimate network effect: everyone wants to speak what everyone else can understand. The old joke goes that the world’s most popular language is bad English. As LLMs narrow the distance between cultures and commerce even further, the differences possible between communities are narrowing. Overwhelming incentives look set to wipe away all but a few languages by century’s end.”

  • BSL has a long history of attempts by well-meaning but ill-informed English speakers to remodel it along the lines of English grammar (Signed English; Sign Supported English; etc). BSL grammar is different from that of spoken language. Our language will be irreparably damaged if AI systems shift it away from its linguistic roots. How can these roots be firmly safeguarded?
  • If one of the “few languages” not wiped away (as the above quote suggests) by AI is a signed language, it seems likely either to be American Sign Language (not related to BSL) or International Sign (which is shifting steadily towards a more stable, standardised form as a result of global travel and digital technology connecting signers worldwide). It is foreseeable that either of these may become dominant, driven significantly by tech companies’ desire to increase profitability by reducing the number of languages they have to process. What safeguards can be established for our centuries-old British language against this prospect?
  • Will some deaf consumers be (further) marginalised by the spread of technological signing solutions? On the one hand, some consumers (e.g. elderly people, deafblind people) are liable to be less comfortable with using digital platforms – will their access to communication be eroded if human support becomes less available? On the other hand, will AI systems be able to ‘read’ all faces equally well: for example, would the BSL grammar of a person with a facial difference be fully understood by an AI system trained (almost) exclusively on more typical faces? There is a risk in both respects that people the systems treat as ‘normal’ are further secured in this privileged position when technology orientates to their norms.
  • Are there imaginative solutions to which the AI BSL industry could voluntarily sign up in the interests of sustaining and growing their core audience? Could there, for instance, be a form of offsetting – eg firms pay a percentage of turnover to specific projects that protect and promote BSL, such as ongoing BSL description, corpus-building etc; or industry payment of the costs for hearing families to develop BSL fluency; or bursaries for deaf BSL teachers to qualify as GCSE schoolteachers?

SOME KEY QUESTIONS

  • BDA wants to ensure AI BSL services and systems are safely developed, deployed, and updated. How can we achieve this?

  • BDA wants to encourage and support deaf leadership in designing, developing, delivering, managing, and evaluating AI BSL services. How can this be built into the system?

  • What should a framework that ensures safe AI BSL and safeguards BSL signers contain?

  • On the understanding that deaf signers must lead on innovating AI BSL and developing AI BSL frameworks, who else should be involved to support this work of enabling high quality, safe provision?

  • Should safety frameworks also include checks that deaf signers featured in AI products have given meaningful consent for their videos to be used? And that companies mining data from online BSL videos have obtained appropriate permissions? How can signers protect their image and data from being misused?
  • BDA wants to ensure deaf people are able to understand and challenge unsafe or misaligned AI BSL systems. How can this be achieved?