
AI, Ethics, and the Feeling Designer: What Automation Means for Human-Centred Practice
As AI tools accelerate through the design industry, the designers whose practice is most fundamentally human are asking a more precise question than most of their peers: not what AI can do, but what it cannot.
The speed at which AI tools have entered professional design practice has left most of the industry's ethical frameworks behind. Design studios are using generative image tools, automated layout engines, and AI writing assistants that have been adopted faster than anyone has been able to think carefully about what they mean for the communities those studios serve. The conversations happening inside firms and agencies tend to be practical rather than principled: which tools are worth the licence fee, whether AI-generated assets need to be disclosed, whether clients will notice or care. These are not nothing. But they are downstream of more important questions that most of the industry is not yet asking.
For designers working in participatory, relational, and feeling-led modes, practitioners whose work is most explicitly about the quality of human relationship and the depth of community knowledge, the arrival of AI tools raises something more specific than general questions about industry disruption. It raises questions about what happens to the things that matter most in their practice when automation becomes the dominant efficiency logic. What happens to deep listening when the listening is summarised by a tool? What happens to the somatic intelligence of a skilled facilitator when the facilitation notes are processed by an algorithm? What happens to the trust built in genuine participation when the community consultation is replaced by a trained dataset?
These questions are not hypothetical in Australia in 2026. They are being answered, by default, in practices that have adopted AI tools without examining what those tools assume about where design knowledge lives and what design process is for.
Australia's Framework for Thinking About This
Australia's national AI Ethics Framework, published in 2019 through CSIRO's Data61, established eight AI Ethics Principles for organisations developing and deploying AI systems: human, societal, and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability.
What is striking about this framework, read from a participatory design perspective, is how closely it maps to the values already embedded in good co-design practice. Human-centred values. Fairness in the distribution of benefits and harms. Transparency about how decisions are made. Contestability: the ability for affected communities to challenge outcomes. Accountability for those outcomes. These are not abstract ethical aspirations. They are the principles that make the difference between a design process that is genuinely participatory and one that performs the appearance of it.
The CSIRO framework is voluntary for most private sector organisations. Research from UNSW and other Australian universities has documented the gap between the principles as stated and the practices that emerge in organisations adopting AI rapidly. That gap is familiar to anyone who has studied the difference between stated values in design practice and the actual choices made under commercial pressure. The principles are easier to hold when things are slow and clear. They are tested most by speed and complexity, which is precisely the environment that AI adoption creates.
The principles already in the room
Australia's eight AI Ethics Principles were not developed for designers specifically, but they describe exactly the values that participatory design practice has always tried to embody. Reading the framework as a design practitioner is less like encountering new ideas and more like seeing familiar commitments translated into policy language. The question is whether the design industry will bring the same seriousness to AI ethics that the best participatory practice brings to power and process.
What AI Assumes About Design Knowledge
Every technology embeds assumptions. Those built into most current AI design tools are worth making explicit, because they are the very assumptions participatory designers have been questioning for years.
The first assumption: that knowledge useful for design exists in the form of data that can be aggregated, trained on, and queried. This is true of some knowledge. It is not true of the knowledge that matters most in relational and participatory design. The grandmother in a consultation room whose understanding of her community's needs comes from seventy years of lived relationship with that community is not expressing aggregatable data. The First Nations knowledge holder whose way of reading Country is embedded in oral tradition and practice is not offering a dataset. The community member whose most important insight emerges after three hours of genuine conversation, in a moment of trust that could not have been predicted or scheduled, is not accessible to a system trained to extract and process information efficiently.
The second assumption: that the valuable output of a design process is the artefact, the report, the document, the deliverable, and that everything else (the conversations, the relationships, the shifted understanding) is overhead. AI tools are extraordinarily good at accelerating artefact production. They are useless for the work that participatory designers consider most important. That is not an argument against using AI tools. It is an argument for being precise about what they are accelerating and what they are not touching.
The third assumption: that the designer's primary contribution is intellectual synthesis (pattern recognition and sense-making from gathered information) and that this is the function worth automating or augmenting. For a graphic designer whose primary contribution is visual problem-solving, this assumption has some validity. For a participatory designer whose primary contribution is the quality of their relational presence in a room, it misses the point entirely. The somatic awareness, the capacity to hold space, the embodied intelligence of a skilled facilitator: these are not knowledge-processing functions. They are not automatable because they are not computational.
The Specific Risks for Community-Facing Practice
The risks that AI tools introduce for community-facing design practice are not the risks most commonly discussed in the design press, which tends to focus on job displacement and copyright. The more immediate risks are about the quality of participation and the location of power.
When an AI tool is used to analyse consultation transcripts and extract themes, it makes decisions about what matters and what does not. Those decisions embed assumptions about relevance, significance, and meaning that were not made by the community whose words are being processed. The community has no visibility into those assumptions. They cannot contest them. The output looks authoritative: themes, patterns, insights. But it is a translation of community knowledge made by a system the community did not consent to, in a language the community did not choose.
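To make that concrete: even a toy theme extractor has to hard-code editorial judgements. The sketch below is purely illustrative, a minimal frequency-based pipeline with invented constants rather than a description of any real product, but every constant in it is a decision about relevance that the people in the transcript never made:

```python
from collections import Counter
import re

# Illustrative editorial decisions, hard-coded by the tool's builder,
# never chosen by the community whose words are being processed:
STOPWORDS = {"the", "and", "a", "to", "of", "we", "our", "is", "it", "that"}
TOP_N_THEMES = 5        # assumes five themes can represent hours of conversation
MIN_WORD_LENGTH = 4     # assumes short words carry no meaning

def extract_themes(transcript: str) -> list[tuple[str, int]]:
    """Reduce a consultation transcript to its most frequent 'themes'."""
    words = re.findall(r"[a-z']+", transcript.lower())
    kept = [w for w in words if w not in STOPWORDS and len(w) >= MIN_WORD_LENGTH]
    # Assumes frequency is a proxy for significance: the insight a participant
    # voiced once, after three hours of trust-building, scores one and is cut.
    return Counter(kept).most_common(TOP_N_THEMES)

print(extract_themes(
    "We worry about the river. The river carries our stories, "
    "and our kids need to know those stories."
))
```

Commercial tools make the same class of decisions with more sophisticated models and far less visibility; the sophistication changes the accuracy of the translation, not the fact that it is one.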
This is a dignity problem before it is a technical one. It recreates, in algorithmic form, exactly the extraction dynamic that participatory practice has spent decades trying to dismantle: a community's knowledge being taken, processed elsewhere, and returned as something that sounds like their words but is no longer theirs.
For designers working with First Nations communities, the risks are more acute. The Australian Indigenous Design Charter is explicit about protocols for how Indigenous knowledge is gathered, held, attributed, and used. AI systems that process community knowledge without transparent disclosure of how that processing works, and without the ability for communities to contest what has been done with their knowledge, are in direct tension with those protocols. Practitioners using AI tools in First Nations co-design contexts should be applying at least the same scrutiny to those tools that the Charter asks them to apply to every other aspect of their process.
AI tools and the translation gap
The translation gap โ where First Nations knowledge enters a design process but fails to shape the outcome โ is already a critical problem in Australian co-design. AI tools that process consultation transcripts or extract themes from community knowledge introduce a new layer of that gap: one that is invisible to participants, operates without their consent, and produces outputs that appear authoritative regardless of how much community knowledge has been lost in translation.
What the Feeling Designer Brings That AI Cannot
There is a more precise account of what AI cannot do, and it is more useful than the vague humanist claim that only humans can be truly creative.
AI cannot hold space. The quality of presence that a skilled somatic facilitator brings to a room is not a pattern-recognition function: attentive to what is said and unsaid, responsive to shifts in the collective nervous system, willing to follow what is emerging rather than what was planned, it is a relational capacity that has no computational equivalent.
AI cannot be accountable. The practitioner who makes a decision in a facilitation and feels the weight of that decision in their body, who stays with the community long enough to see how the decision plays out, who returns the following year to a community whose trust they are still earning: this practitioner is in relationship in a way that creates real accountability. AI systems do not have the relational continuity that makes accountability meaningful.
AI cannot be affected. One of the marks of a mature participatory practitioner is the willingness to be changed by what they encounter. To have their assumptions shifted by a community's knowledge. To learn, over time, something that dismantles what they thought they knew. This is not incidental to good participatory design practice. It is its engine. A tool that processes information cannot be affected by it.
None of this is an argument for refusing to engage with AI tools. The practical design work of synthesis, documentation, visual exploration, and communication can be genuinely assisted by AI tools in ways that free practitioners for more relational work. The argument is for precision: about what AI is assisting and what it is not, about where the practitioner's irreplaceable contribution lies, and about the ethical scrutiny that any AI tool used in community-facing practice deserves.
Apply the eight principles to your current tools
Go through CSIRO's eight AI Ethics Principles and apply them to one AI tool currently in use in the practice. Not as a compliance exercise, but as a genuine audit. Which principles does the tool meet? Which does it not? What would need to change in how the tool is used for the principles to be honoured? That exercise is where a practitioner's AI ethics stance becomes specific enough to act on. One way to structure it is sketched below.
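The audit can be as simple as a worksheet, one question per principle. This sketch is a minimal illustration; the prompts are written for this exercise, not official CSIRO guidance, and a real audit would adapt them to the specific tool and the communities involved:

```python
# A worksheet for auditing one AI tool against the eight principles.
# The prompts below are illustrative, not official CSIRO guidance.
AUDIT_PROMPTS = {
    "Human, societal and environmental wellbeing":
        "Whose wellbeing does this tool demonstrably serve, and whose does it put at risk?",
    "Human-centred values":
        "Does the tool respect the autonomy of the people whose words it processes?",
    "Fairness":
        "Who captures the efficiency gains, and who absorbs the errors?",
    "Privacy protection and security":
        "Where do consultation transcripts go once uploaded, and who can access them?",
    "Reliability and safety":
        "Has the tool's output ever been checked against what participants actually said?",
    "Transparency and explainability":
        "Could we explain to a community how the tool arrived at its themes?",
    "Contestability":
        "Is there any mechanism for participants to challenge the tool's output?",
    "Accountability":
        "If the tool misrepresents a community, who in the practice answers for it?",
}

for principle, prompt in AUDIT_PROMPTS.items():
    print(f"{principle}\n  {prompt}\n  Met / Not met / Unknown: ____\n")
```

The "Unknown" option matters: for many commercial tools, several of these questions cannot currently be answered, and that is itself an audit finding.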
The Conversation Worth Having
The design industry's conversation about AI needs to broaden from the efficiency and disruption framing that currently dominates. For practitioners whose work is explicitly about the quality of human relationship and the depth of community knowledge, the more important conversation is about what AI tools assume, what they displace, and what they cannot touch.
Australia has a sophisticated national ethics framework for AI. It has universities producing research on the gap between stated principles and practice. It has a community of participatory design practitioners who have spent years thinking rigorously about power, knowledge, and the ethics of process. Bringing those communities into the AI ethics conversation would produce something more useful than most of what the design press is currently generating.
The Feeling Designer's readers are well-positioned to contribute to that conversation. Practitioners who understand safety, belonging, and dignity as structural design concerns; who know what it means to attend to power in a room; who have built practices around genuine accountability to communities: these practitioners bring something to the AI ethics conversation that neither technologists nor ethicists alone can provide. They bring the practitioner's knowledge of what a well-held human process actually looks like. That knowledge matters. The conversation needs it.