How can we be assured that we don’t develop artificial intelligence to the point that some independent entity decides that we humans are THE problem and presses delete?
AI engineers may simply be doing what engineers do: working reductively on a discrete part of a problem, with the ethical context left to someone else. The commercial drive to develop ever more capable AI should make us all nervous unless we have absolute transparency; ‘commercial-in-confidence’ is not helpful here.