The responsibility of interaction
Since discovering that gravity, pressure, and combustion could power assistive technologies, our species has been on a relentless, inventive pursuit of the next best means of moving goods and people from point A to point B.
And despite our collective desire to keep developing automotive technologies, witnessed in the seemingly incessant stream of designs and features released annually by multinational automotive corporations, the industry's innovative gains are better represented as the gentle, recurring spikes of a cardiogram than as a lightning flash of enterprise genius.
Still, a single development can sometimes reshape our approach to a technology as a whole.
With the advent of semi-autonomous technologies designed to replace traditionally interactive assistive machines, such as Google's Self-Driving Car and its Chauffeur software, which aim to supplant the human-guided automobile, some are beginning to contemplate the potential consequences of removing the responsibility of interaction.
What does it mean for our society to move from assistive technologies, where the human is the driver, both figuratively and literally, to those that are fully autonomous? When an autonomous machine fails, where is the weight of that failure realized?
Simply put, this shift in causality is significant. As we invent and invite advanced technologies into our lives, we must cooperatively reevaluate the assignment of responsibility.