"AI foom," short for "recursively self-improving Artificial Intelligence engendered singularity," comes from these ideas: AIs and humans would both want AIs to improve themselves; an AI-based singularity will occur soon; human-level or better AIs could not be imprisoned or threatened with having their plugs pulled, because they would talk their way out of the situation; and such an AI won't kill us, inadvertently or otherwise.

Interestingly, Yudkowsky claims to have tested the escape idea by role-playing the part of a supersmart AI, with a lesser mortal playing the jailer.

Believing AI is imminent, Yudkowsky has taken it upon himself to create a Friendly AI (FAI). He believes he has identified a "big problem in AI research": without our evolutionary history, we can't assume an AI would care about humans or ethics.

Eliezer Yudkowsky on how to save the human race: "Find whatever you're best at. If that thing that you're best at is inventing new math of artificial intelligence, then come work for the Singularity Institute. If the thing that you're best at is investment banking, then work for Wall Street and transfer as much money as your mind and will permit to the Singularity Institute, where it will be used by other people."