Should we design things we don’t understand?

Over the last few years you have probably heard the terms machine learning (ML) and AI thrown around more and more. If I had a dollar for every time I heard "machine learning" at the last design conference I attended, I'd be rich. So late last year I decided I needed to learn how to use machine learning algorithms. With tools like Microsoft Azure Machine Learning Studio you can get an ML algorithm running in a few minutes without ever writing any code. Using the drag-and-drop interface, I was able to create algorithms that recommend products to buyers, perform text analysis, and make predictive models, all quickly and without much effort. This is pretty incredible! It opens ML up to more and more people by making it easy and accessible.
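The same accessibility shows up in code. As a rough illustration (not the Azure tool I used, just a minimal sketch assuming the open-source scikit-learn library and one of its bundled demo datasets), a working predictive model takes only a handful of lines:

```python
# A minimal sketch, assuming scikit-learn and its built-in demo dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a bundled dataset and split it for training and testing.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a predictive model with all-default settings.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Report accuracy on data the model has never seen.
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

No knowledge of how the model works internally is required to get a respectable result, which is exactly the point.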

Even for the most experienced practitioners, what happens inside many algorithms cannot be interpreted, owing to the structure of the chosen algorithm and the complexity that results. And as strides are made in interpreting today's algorithms, the next generation becomes even more complex, outpacing the human ability to interpret their inner workings [1].

This led me to an ethical dilemma: Is it ethical to design products with tools I don’t fully understand? 

We often call these kinds of tools black-box functions or black-box algorithms.

[Image: black box diagram]

We have some inputs, which might be product parameters or user data. We feed them into our black box, where some magic happens, and poof! We get an output that hopefully gives us the result we want. In many of the products we interact with, whether software or physical products with firmware updates, the algorithms are updated all the time to produce better results. This is where the learning of machine learning happens: as we get more data, the results may get better and better. Most of the time this is harmless, and it puts a lot of tools in people's hands that they can use without advanced knowledge. But sometimes, because we don't understand what is happening inside the black box, whether we are beginners or advanced ML users, there are unintended consequences.
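To make the black box concrete, here is a small sketch (again assuming scikit-learn, with synthetic data standing in for real inputs) of a neural network we can query freely but not meaningfully inspect:

```python
# A sketch of the black box, assuming scikit-learn and synthetic data.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic inputs stand in for product parameters or user data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Train a small neural network; this is our black box.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0).fit(X, y)

# We can get an output for any input we like...
print(model.predict(X[:5]))

# ...but the internals are just thousands of raw numbers, none of
# which explains *why* a given prediction came out the way it did.
n_weights = sum(w.size for w in model.coefs_)
print(f"{n_weights} learned weights inside the box")
```

The model answers every question we ask of it, yet its thousands of learned weights tell us almost nothing about how it decides.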

Let’s look at social media as an example. The following refers to Facebook, but most social media platforms function in a similar way. The Facebook algorithm for your newsfeed has over 100,000 weighting parameters [2]. These algorithms maximize the amount of engagement you have with the platform. Somewhere inside that black box, the algorithms have learned that more extreme content gets more engagement out of users. This has helped drive polarization, letting virtual behavior spill into real-world consequences such as the January 6th attack on the United States Capitol [3].
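A toy sketch (entirely invented posts and engagement scores, nothing like the real system's 100,000-plus parameters) shows why an engagement-only objective behaves this way:

```python
# A toy feed ranker with invented posts and engagement scores, to show
# how an engagement-only objective surfaces extreme content.
posts = [
    {"text": "Photos from my weekend hike", "predicted_engagement": 0.12},
    {"text": "Neighborhood bake sale on Saturday", "predicted_engagement": 0.08},
    {"text": "SHOCKING claim they don't want you to see!!", "predicted_engagement": 0.47},
]

# Rank purely by predicted engagement, the objective the platform optimizes.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["text"])

# The most provocative post lands on top: the objective measures clicks
# and shares, not accuracy or social impact.
```

Nothing in the objective asks whether the top post is true or harmful; it only asks whether you will interact with it.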

If I can’t understand the algorithms in my product, should I release it?

These unintended consequences of not completely understanding our designs are not limited to ML and social media. With the proliferation of Internet of Things (IoT) devices, we have seen another problem arise. The barriers to developing such devices have fallen so far that even someone with little experience, such as myself, has been able to create IoT devices after a few days of tutorials and prebuilt code libraries. There is a big problem with this: none of those tutorials said much about security. Many of our “smart devices” have little security built in, or we don’t perform the updates needed to keep up their defenses. In 2016, millions of these devices were hijacked in a distributed denial-of-service attack that brought down some of the biggest sites on the internet [4].
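Here is a sketch of what that tutorial-style code often looks like (a hypothetical device; the relay toggle is a stand-in for real hardware access). Notice everything that is missing:

```python
# A sketch of tutorial-style IoT code (hypothetical device; the relay
# toggle is a stand-in for real hardware access). Note what is absent:
# no authentication, no encryption, no way to push security updates.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/toggle":
            # toggle_relay()  # hypothetical call that would flip a hardware pin
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"relay toggled\n")
        else:
            self.send_response(404)
            self.end_headers()

# Anyone who can reach this port can control the device.
HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```

It works on the first try, which is why tutorials stop there, and why millions of devices like it ship with their front doors wide open.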

If I don’t understand how to keep my product safe, should I release it?

This might all sound like doom and gloom. Despite the unintended consequences that come from an incomplete understanding of emerging technologies, I think we should continue to use them. The advances made by using sophisticated algorithms, and by making technology more accessible to people, have been amazing. We will always lack knowledge in some regard about the products we design. My advice is to be realistic about what we don’t know about the tools we use to design products, and to seek outside help when necessary. It is also wise to consider what social, economic, and environmental impacts the products we create will have on users and on the world.



1. Barber, Gregory. “Inside the 'Black Box' of a Neural Network.” Wired, 6 Mar. 2019, www.wired.com/story/inside-black-box-of-neural-network/.

2. Irvine, Matthew. “Deblackbox Facebook News Feed Algorithm as a System for Attention Manipulation.” CCTP-607: "Big Ideas": AI to the Cloud, 7 May 2019, blogs.commons.georgetown.edu/cctp-607-spring2019/2019/05/07/design-ethical-implications-of-explainable-ai-xai/.

3. Wu, Katherine J. “Radical Ideas Spread through Social Media. Are the Algorithms to Blame?” NOVA, Public Broadcasting Service, 28 Mar. 2019, www.pbs.org/wgbh/nova/article/radical-ideas-social-media-algorithms/.

4. “'Smart' Home Devices Used as Weapons in Website Attack.” BBC News, BBC, 22 Oct. 2016, www.bbc.com/news/technology-37738823.
