Anna Pastak // November 01 2018

Heading towards a future of Terminators?

This week we are exploring the possibilities and threats of AI, a very current phenomenon that is everywhere. We’re living in a time of digitalization, and more and more people know what AI is an abbreviation of. Artificial Intelligence is everywhere, from our phones to the software used by the multitude of companies around us.

If we want to simplify AI, we could say it will develop in three stages. Currently we are in the first stage of development, so-called narrow artificial intelligence. This description applies to computers and smartphones that can handle complex and massive loads of information and data and execute simple tasks based on them. This is where we already are. The next step is General AI, which sounds like an army commander but is actually much scarier than that to many people. It is a stage of AI development where computers can basically perform any inference better than a human being. We are not quite there yet, but this stage isn’t far away. According to some enlightened guesstimates, it is about a few decades away.

The third stage is what is called (mostly with a scared whisper) the Singularity, a so-called Super-AI, which basically means that a superhuman artificial intelligence will accelerate the technological and social change of the human race, eventually leading to the destruction of the planet, or at least the rule of machines. Scientists still disagree on whether Super-AI will ever be achievable. Simply put, Super-AI would do everything better than us slow, stupid humans. It would create and give birth to little not-so-cute AI babies and basically make us humans slaves, or extinct. We have all seen enough movies on the subject; I guess everyone knows this train of thought.

There you have it, the short future of humankind with AI. Considering we still have a lot of people in high places who are not even fluent in using their smartphones, should we really play with AI and the threats it can bring? Or should we all start listening to Rage Against the Machine (literally, not figuratively) and stop any and all AI development to escape the future that James Cameron predicted back in 1984? Where is our John Connor, who will stop all this before it begins?

Or is it all that simple after all?

Now, this is coming from someone who can barely understand the AI in her new smartphone, but I do think the threats are a bit exaggerated, and I can’t help but think that a lot of the questions raised by AI come from fear. It’s human nature. We are afraid of what we can’t fully understand. We might turn to it in desperation, but in a placid moment we start to look at it from all sides, and we start painting pictures of dangers in brighter red than the fake blood on All Hallows’ Eve.

But if even someone like me can look past the fear and try to understand, I’m sure we all can. So this week, together with Digitalist’s resident AI expert Yiannis Maglaras, we will look deeper into the threats and possibilities of AI.

I guess the first thing people think of is: what is going to happen to our jobs? Will robots take over our jobs, leaving us moneyless, homeless and devastated while the Terminators rule the world? Looking at the self-service tills at the supermarket makes me wonder whatever happened to those nice people who used to beep my groceries and ask for my loyalty card. On the other hand, I absolutely love that my car has cruise control that monitors the distance to the car ahead. It makes me eager to have a self-driving car! Imagine the possibilities: I could put on my morning make-up in the car while it takes me wherever I need to be. And the car would do it better and more safely than I ever would, as I will always have that human factor, which makes me prone to errors. We won’t be needing any drivers after that, though, so what will they do? Taxi drivers, truck drivers, and so on?

Even with my less-than-average understanding of AI, I’m not really afraid of machines taking over our jobs, but rather of them restructuring our work life, bringing a lot of new jobs while taking away some others. I dream of a future where human labour is more creative and takes less time out of our day, leading to more time for sports, arts and so on, and more time with our families while machines do some of the manual labour. God knows I would prefer to learn Spanish or become a true cake-baking aficionado instead of doing my book-keeping.

What if the machines don’t learn properly? Like when my cleaner robot throws my grandma’s ring, the one I inherited, in the trash after my toddler drops it on the floor? Or fluffs my little dog like it’s some sort of plush toy?
Well, if this is an actual problem, a robot not separating trash from family treasures, I guess that whole Machine Society ain’t happening anytime soon. And this is why we most likely won’t be losing all those jobs, but just getting new ones. We need the complex human mind to create, eliminate errors and fix machines.

But wait! Let’s go back to that self-driving car. How will it do its risk assessment without killing someone in the process? If someone suddenly crosses the street (a drunk, a kid, a cat, etc.), will it stop or drive on? What if the options are the driver crashing into a pole or a wall, or killing another person? What would the machine do? What would I want it to do? Definitely save the kid, but what about my life, or a random drunk person’s life? And who will carry the responsibility if someone is run down because the machine chose me? This brings it all back to one of the most common questions in discussions about AI. A machine will never have the complex emotional toolkit that a human being does. It will not feel love or sadness, it will not have morals, and it will not understand the human factor. And do I really want to drive a car, or do anything else that carries risk, according to someone else’s morals? Whose morals represent the absolute truth according to which we should program the machines? And how do religions and cultural differences play into all of this?

Will all of this just lead to disaster?

On the other hand, a machine will not feel hate or racism, unless those thoughts are programmed into it. And since so many humane feelings cause so much harm, so many wars and threats, maybe a little inhumanity is a good thing?

I sincerely don’t know.

What I do know, though, is that I’m not afraid of AI. I’m cautiously curious. I trust that AI development will be regulated and supervised by people who want to see a brighter future for humankind. I’m an eternal optimist. And honestly? I love that I don’t have to keep my texts to 160 characters and that my phone actually predicts the right words when I’m typing to a colleague while being pulled away to organize Legos by two very demanding small hands. Or that I can unwind and just drive, while the navigation system makes sure I get where I’m going. I can listen to my Spotify weekly suggestions based on my music taste and find new songs I like. All the while, my grass-cutting robot takes care of our lawn, making sure I (or, let’s be honest, my husband) don’t need to worry about our yard looking decent. Gone are the days of rewinding my compact cassettes with a pencil (I guess the latter will stop existing soon enough too), trying to find locations on paper maps, and playing Snake on my Nokia.

It’s the coolest thing to be a person born in the late ’80s. I still have Mötley Crüe and the ’80s glam rock bands in my life, but I also know of a thing called text prediction. I’ve lived through the moment we got our very first colour TV, but I have also lived to see my 18-month-old pick his favourite cartoon in the YouTube app on the iPad. I remember the Agricultural Revolution from my school history classes. Now I get to be part of the Digital Revolution.

I am part of the generation that has seen this amazing leap into the digital era. Let’s use our knowledge for good, take care of our planet, and make sure that the generations of the coming decades and centuries will look back at us in awe.

Read more from Yiannis tomorrow to see what someone fluent in AI thinks of all of this! Should we be afraid, or should we be curious? Stay tuned!
