My eldest grandson, Chris, is now studying for his MSc in Physics and living just down the road from us. So, on a Sunday evening, he pops over and we get a takeout. While he’s here, we chat about our shared belief in the current theory that the universe is made up of tiny bits of string, and about other matters.
Well, last night, we got onto artificial intelligence and the benefits (championed largely, but not solely, by Chris) and the concerns (voiced largely, but not solely, by me) that the development of this branch of science might bring. Along the way I said that, although science might come up with a great many ideas and their practical applications, it was then up to society, as a whole, to decide whether any particular developments should be continued, or not (and, yes, I know how idealistic that particular notion is). I cited, for example, nuclear research, something that had led to the creation of nuclear weapons; the former being amazing and the latter, less so. We then got into the finer detail in regard to AI. We discussed computer-generated music and whether technology could ever have written the music of Bob Dylan, for example. Can, in fact, a robot develop the idiosyncrasies of a human mind, and what would the world be like if it did? If I listened as much as I talked, I think Chris felt that it could.
So, my point is? Well, I woke up this morning with a thought in my head. If robots did develop such idiosyncrasies, and we continued to develop such entities, they could become complete replicas of humans, with all the foibles of the latter. In which case, what would be the point of developing them in the first place, when we have the original versions already? More importantly, what would be the outcome for humanity if we did? Food for thought there, for sure.