We all know the trope: a machine grows so intelligent that its apparent consciousness becomes indistinguishable from our own, and then it surpasses us, and possibly even turns against us. As investment pours into efforts to make such technology, so-called artificial general intelligence (AGI), a reality, how scared of such scenarios should we be?