Cyber Chat: Klaus Krippendorff responds

In this article, Klaus Krippendorff responds to points of discussion raised in the chat during his RSD10 keynote “From Uncritical Design to Critical Examinations of its Systemic Consequences.” The questions raised by Peter Jones, Anja Overdiek, Jan Lelie, Elisa Giaccardi, Derek Lomas, and Ben Sweeting open absorbing arcs of insight into criticality, data, cybernetics, and ethics.

Professor Dr Klaus Krippendorff’s research focuses on the role of language and dialogue in the social construction of reality. Klaus calls on designers to become critical of what their work supports, and cognizant of and accountable for the systemic consequences of their designs. He also suggests that designers move from uncritically embracing innovations to an emancipatory perspective on the systems they enable.

Klaus Krippendorff responds

Peter Jones – Can we activate criticality as corrective feedback – without a theory of design power? After all, we are intervening in second-order decision-making. That is, as individual designers, academics, etc., how can we amplify our critique of toxic or maladaptive cyber-systems with our soft power, short of “using politics?”

Klaus Krippendorff – Peter, you think designers have only soft power that does not measure up to toxic maladaptive cyber systems. I encourage you to recognise that the only “power” technological systems have over you is the power you grant them. For example, if you conceive complex computers as intelligent machines, you are already giving up some of the intelligence you have, favouring their mechanistic accomplishments. You do not need to succumb to pretentious claims. Taking action is what you can do; machines can’t.

Anja Overdiek – Shouldn’t we contest the notion that technology is socially neutral? That might be true in theory, but in practice, humans (and their power systems) are entangled with technology.

Klaus Krippendorff – Absolutely. The conception that technology is socially neutral may have been true when technology was simple. Driving a nail into a piece of wood with a hammer can be done by almost everyone – women or men, white or black. But we live in a different world. Getting a loan from a bank requires passing a statistical test of whether you are worthy of receiving one. You don’t know what this test measures, and if the bank denies you the loan, you may never know why. It may well be that you live in a zip code with a predominantly black population, or you have a Muslim-sounding name – but nobody will tell you.

Peter Jones – In cyber autonomy, design lags tech engineering. Algorithms anticipate business models and are exploited in platforms before design wakes up. We have no entry point, no invitation to redirect (unless we have user data), and we end up designing services for the algorithms – a huge lag in influence. In this realm, it’s like HCI design in the 1980s, in software before PC platforms were standardised. Also, the early Web. Now AI. Do algorithm engineers care about speculative design ethics? From my experience, I can say that they respect data – if you have the data to make the case, not the “storytelling.”

Klaus Krippendorff – You are quite correct that the dominant argument for a design is the available data. But you don’t have to buy into such arguments. As I mentioned, data is always from yesterday and always represents what exists. If designers are willing to articulate their mission of creating artifacts that do not already exist, they should find evidence for unused possibilities, not data from yesterday.

Jan Lelie – One can only have “critique” in language – for instance, the ability to say “good” or “bad”. This brings to mind the story of Genesis: Adam names all plants and animals, giving them their “true name” (because when one knows the name, one can have control). Then they are expelled from paradise, as they have eaten (feedback!) from the tree of knowledge of good and evil. (Of course, a man would blame a woman, but he had the choice not to eat.) So does our use of language make objects (and products) “critical”?

Klaus Krippendorff – Yes, those who control the language spoken by others can make others do things they may not want to engage in. However, you do not need to buy into the language of others. As your parable suggests, you do have choices. For designers, critique should not merely express likes and dislikes – note that likes and dislikes can be expressed mainly for existing alternatives. The crucial feature of critique is comparing something with what could be – but is not realised. This is the kind of issue that design should be concerned with.

Elisa Giaccardi – The acknowledgement that making data technology more ethical requires adherence to principles of responsible innovation across public and private sectors usually translates into legislative frameworks and methods for explicitly incorporating ethical considerations into policymaking, business decisions, and everyday design processes. But these outcomes do not tackle the urgent need for a fundamental rethinking of design as a practice.

Klaus Krippendorff – Quite true. Ethical principles are usually formulated when something undesirable gets in the way of justice. Similarly, administrative rules tend to require lengthy processes to formulate and adopt. They may well cover the worst infringements, but clever people may have already found a way to bypass them by the time they are enforceable. I think designers need to be ahead of ethical considerations.

Derek Lomas – It seems like there is an opportunity for more education around critical cybernetics – I think the concepts are nearly forgotten. Ironically, cybernetics feels so inhuman when it can provide a healthier way of interacting with technology.

Klaus Krippendorff – I couldn’t agree with you more. Education for critical design is of the essence. I tried to show that classical cybernetics has essentially exhausted its mission in creating increasingly autonomous systems that call for everyone to submit without critique. I am working on critical cybernetics that does not deny classical cybernetics (of machines) but goes beyond designing technologies, including the language of understanding them, making it more human-centred.

Ben Sweeting – @peter – Yes, yes, yes, and when things come in later, they become just ways of moderating or mitigating from the outside. So it seems like ‘ethics’ is something from the outside with which to trade-off against efficiency, etc., and it gets lost in policy frameworks rather than fundamental rethinking, as @elisa mentioned earlier.

Klaus Krippendorff – I am not sure, Ben, whether this is a question. As you may have noted, much of what I said is ethically motivated but doesn’t formulate ethical rules, which can be oppressive.

Ethical rules must be ethical. They should not be stated by an authority (scientific, philosophical, or humanist). It might be naïve to think that listening to others, encouraging conversation among the stakeholders of designerly interventions, and providing others spaces for designing their own worlds (delegating design) is sufficiently broad to prevent oppression by algorithms.

From RSD10 Presentation – Intervening in the ecology of artifacts: all designs start as proposals, move through networks of stakeholders with resources and organizational know-how, enter particular assemblies of other artifacts for the benefit or detriment of individuals, their organizations, and larger institutions, until they are retired, disassembled, and enter the environment.

Posted January 2022