Plumbing Deep Learning in the Shallows

Your Next Pet, or Its?

• Machine Interaction, Neural Net, Psychology, Artificial Intelligence, and Finnegan

I’ll preface this with the fact that I am not a psychologist; I haven’t studied the field since college. I’m not even a scientist. So what follows should be treated as nothing more than anecdotal. That being said, it seems fascinating, so if anyone out there feels the need to write a doctoral thesis on this, please let me know how it turns out.

My simple neural net was a puppy.

In the past year there has been plenty of bluster over the dangers of Artificial Intelligence: the rise of our robot overlords, the ever-present threat of the plague of grey goo. There has also been a strong effort to counter those dangers, notably Elon Musk’s OpenAI. There has been comparatively little talk about the potential benefits of AI, or even about what the differences are between AI and ML (if, in fact, there is a difference). But amidst all of the noise, I noticed something interesting in my little backwater of this tumult of research.

Finnegan is a simple feed-forward, fully connected neural net that I built and trained on the MNIST handwritten-digits dataset. I then built a webapp that lets a user draw a digit and see whether the net can correctly classify it.
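For the curious, a net like Finnegan can be sketched in a few dozen lines of NumPy. This is not the original code; the layer sizes, learning rate, and the random stand-in "images" below are all illustrative assumptions, but the shape of the thing (784 flattened pixels in, a hidden ReLU layer, softmax over 10 digit classes out) is the standard feed-forward, fully connected setup described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# MNIST images are 28x28 grayscale, flattened to 784 inputs; 10 digit classes.
W1 = rng.normal(0, 0.1, (784, 64))   # hidden layer size (64) is an assumption
b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 10))
b2 = np.zeros(10)

def forward(x):
    """One forward pass: hidden ReLU layer, then softmax over the 10 digits."""
    h = np.maximum(0, x @ W1 + b1)                        # hidden activations
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, exp / exp.sum(axis=1, keepdims=True)

def train_step(x, y, lr=0.1):
    """One SGD step on cross-entropy loss, with gradients done by hand."""
    global W1, b1, W2, b2
    h, probs = forward(x)
    n = x.shape[0]
    grad_logits = probs.copy()
    grad_logits[np.arange(n), y] -= 1                     # dL/dlogits
    grad_logits /= n
    gW2 = h.T @ grad_logits
    gb2 = grad_logits.sum(axis=0)
    grad_h = (grad_logits @ W2.T) * (h > 0)               # backprop through ReLU
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    return -np.log(probs[np.arange(n), y] + 1e-12).mean() # cross-entropy loss

# Stand-in data (real use would load MNIST): 32 random "images" and labels.
x = rng.random((32, 784))
y = rng.integers(0, 10, 32)

losses = [train_step(x, y) for _ in range(50)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The webapp side is just plumbing: capture the drawn canvas, scale it down to 28x28, flatten it, and feed it through `forward` to get a probability for each digit.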

The first results, while better than chance, were no better than 50/50. (I had a poor architecture and an over-fit model, but that is beside the point.) As people tried it, though, something funny happened. Initial reactions to the project were all in line with the doom/gloom/paranoia from the media, sometimes facetiously, sometimes seriously. But their interactions belied those sentiments almost universally. They treated the webapp like a puppy, or a small child. Most people would express excitement when it guessed correctly. And when it guessed wrong, they would be upset and usually blame themselves for poor handwriting. Then they often retried the same digit, more methodically, as if trying to coax it toward a correct answer.

Now any number of reasons could explain this, not the least of which is the fact that I was right there, and perhaps they didn’t want to see me (in my endeavour) fail. But whatever the reason, humans were interacting with a machine more as they would with a pet or a small child than with a tool, or worse, a potential threat.

Maybe the future need not be so bleak after all.
