•  

    Perhaps I should first state that I have an abiding interest in this subject, since I was working in AI research as a grad student at UCSD in the early 1970s (!!). We were working on things at the time which have not yet seen the light of day, in semantic networks (modeling human memory processes), but while one could chat about that, it seems to me that today there are huge issues with technical STEM education, which is really going in the wrong direction.

    At this time only between 5 and 7% of students study STEM subjects, while in China the percentage is 30 (in Israel it is reported to be 28%, probably accounting for their disproportionate presence in start-ups in the sector). We in the West do not seem to value STEM subjects the way the US did during the period which led to the collapse of the USSR, a collapse driven by the Soviets' recognition of their lack of capacity to innovate. We risk the same fate.

    It seems to me that for AI, and AGI, to be more than just an easy way to write an essay or to access the information soup we are all swimming in, their socially and economically useful expression requires a science-based society, or at least a society as open to science as it is to woke-based cults. STEM education needs to be prioritised. It is only at that cost (and there are costs involved) that the emergence of AGI will be the strong positive your authors suggest it might be.

  •  

    This is due to their perception of human rights. How are they going to appreciate and value something they have not been in contact with since the origins of their companies, or of their individual perceptions of reality?

  •  

    I think the world needs pro-human human beings. A significant (and powerful) fraction of human beings consider other human beings to be disposable animals. This immoral behavior cannot be fixed by AI agendas. It is a fundamental issue of human nature, with nothing to do with algorithms or machines.

  •  

    Tech leaders do not underappreciate human talent; they only underappreciate everyone else's talent. Musk believes he is the only truly essential human being.

  •  

    The real point of this article is that the natural objective of capitalism isn't competition, but monopolization.

  •  

    Digital is a code, in itself similar to the alphabetic one.
    Mass literacy is a recent achievement, arriving some thousands of years after writing; the digital code spreads faster, is more powerful, and consumes far more energy.
    AI makes it easy to export from ourselves functions once thought to be exclusively human (I suggest reading Plato's Phaedrus).
    I agree with the beginning of the article: what remains to be seen is how quickly things will change and for whose benefit.
    I also agree with the conclusion: we need a ... pro-human agenda for AI; I would say we need humanware.
    AI is NOT ONLY an information technology: it is rapidly going to become a substitute for language, interposing itself in every communication and also directing meaning towards "where AI wants" …

  •  

    What if a continuously renewed and highly diversified board of taxpaying citizens teamed up with AI, e.g. ChatGPT, to measure the effectiveness of government in real time? Such transparency would add great new significance to democracy.
