What is Beneficial AI?

     In our presentation, we explained briefly what beneficial AI was. But what is beneficial AI exactly? I would like to look into this topic a bit further in this entry. 

    Basically, beneficial AI ensures that we use AI safely and only gain benefits. It is considered the "best way we can safeguard the long-term future". Although AI is considered one of the most powerful and important technology of humanity, this does not mean that it is made to be safe and for beneficial purposes only. It is humanities job to make sure that AI is built safe and used only for beneficial purposes. So, how do we accomplish this? First, let's look at why beneficial AI is so important.

    

    Open AI has been training AI so that it will achieve its goals they way we want them to. One way they train AI is by games. One of the games they train on called CoastRunners requires the player to finish the race as fast as possible and ahead of other players. In this game, you get points by hitting targets that are laid out along the route and not by the player's progression around the course. OpenAI rewarded the AI for increasing its points until they faced a misaligned high score. Apparently, the layout of the targets which you can gain points from were laid out ina way where the AI realized that it can get higher points repeatedly hitting the targets instead of crossing the finishing line. See the video of OpenAI's AI odd behavior here or here↓.



    These kinds of faulty functions happened in real life situations as well. When Amazon was trying to use AI to screen resumes to increase the fairness of who they were hiring, the AI ended up being biased against women. It disallowed words that incuded words such as "women's" and "netball", while favoring masculine words such as "executed" and "captured". Although these programmings were not intended, women must have felt uneasy to hear this news. 


    So, what is the importance of beneficial AI? Well, it is similar to AI safety, but here is how Giving What We Can saids,

"We think ensuring beneficial AI is important for three reasons:

  1. AI is a technology that is likely to cause a transformative change in our society --- and poses some risk of ending it.
  2. Relative to the enormous scale of this risk, not enough work is being done to ensure AI is developed safely and in the interests of everyone.
  3. There are things we can do today to make it more likely that AI is beneficial."

    As we saw with the two examples of faulty AI, it is important that we ensure that this does not happen in real life when AI is more imbedded in our lives such as in scenes where AI would make decisions for us. No one would want AI to be racist and discriminate certain people in their resumes. 

 

   The Institution of Telecommunication Union (ITU) states that the problem we are currently facing in terms of creating beneficial AI is how humans cannot specify obejctives perfectly, which is actually needed for AI to operate properly and precisely. According to Stuart Russell, professor at UC Berkeley, "when we talk about 'AI for good', we do not know how to define 'good' as a fixed objective in the real world that could be supplied to standard model AI systems." He states that we do not know how to specify objects correctly and precisely, which we can assume since we view things differently and there is never a correct answer for almost anything. Yet when it comes to AI, a simple nuance of the objective will not work. Pursuing explicitly defined goals and objectives will be a very important factor for future AI to function properly.

    ITU and Russell both agree that we design AI to become an assistant rather than a decision-maker. If AI will turn to become something that helps us, then we will be able to use AI more freely without us having to compete with AI in intelligence. AI as an assistant will also help us solve problems and transform AI into a service for problem solving. I believe this sounds adequate and I agree with this goal. Then again, there will be power-hungry seekers who would try to use AI in a bad way so we do need to be careful in those terms as well. 


     I hope you were able to get a further idea of what beneficial AI was! I really hope AI will not become decision-making machines since if they truly gain that power, we living creatures would lose part of our uniqueness.

    This was the blog to question #6 of our 30 questions.



References:

Clark, J. (2019, March 7). Faulty Reward Functions in the Wild. OpenAI. Retrieved July 26, 2022, from https://openai.com/blog/faulty-reward-functions/

Giving What We Can. (2021). Beneficial artificial intelligence. Centre for Effective Altruism. Retrieved July 26, 2022, from https://www.givingwhatwecan.org/cause-areas/long-term-future/artificial-intelligence

ITU News. (2021, November 30). Is ‘provably beneficial’ AI possible? ITU Hub. Retrieved July 26, 2022, from https://www.itu.int/hub/2020/09/is-provably-beneficial-ai-possible/

 

Comments

  1. I thought this was a Very informative post. i literally did not know ANYTHING about beneficial AI, so in a way this post was pretty helpful! AI is such a mystery world to me, and to see that there are so many categories and levels to them was interesting to learn. The blog itself was formatted really well, and i LOVED how you included a video in it. it was a pretty interesting watch! i really do hope AI develops for the good in the future, and that we get to see more of this Beneficial AI in the coming future!

    ReplyDelete
  2. I agree that this was an interesting post but the examples showed more what is _not_ Beneficial AI than what it is. Seeing the game example that involved faulty instructions, I was reminded how humans (including students in the IE Program) will find ways of "gaming" a system to their advantage when they find gaps in a task or instructions that can be exploited. For example, a small minority of IE students using Xreading (an online program for extensive reading) find ways to get a high "words read" count without actually reading the digital books. What the AI bots were doing in the game to make maximum points without actually going over the finish line, is similar to the way some IE students "game the system." The fact that few students (I hope!) game the system in this way shows that the will to improve one's English supersedes the temptation to cheat or to take shortcuts that don't result in learning gains. For AI bots it may be harder to program in motivations that involve self-improvement or fairness.

    ReplyDelete

Post a Comment

Popular Posts