Summary of our Surveys

  First, we would like to thank everyone who was kind enough to answer our surveys. If you haven't answered yet, you still can, so please go and do that.

We've summarized all of the answers to the surveys. Below, we have written down what we found out from both Survey Monkey and Google Forms.


Survey Monkey

Hi, it's L.S. Verus.

I handled the Survey Monkey survey, and I'll summarize the votes below.

First, the quiz.

Q1, AI stands for Artificial Intelligence? 100% of people got it right.

Q2, There are many different types of AI. 90% of people got it right.

Most people got all of the answers right. From this, I think we can tell that most people know at least a bit about AI.

Next, I would like to look at what people think about AI right now.

Q3, Is making progress in the AI field a good idea?  93.75% said yes.

Q4, Can AI help people in their daily lives? 93.75% said yes.

Q5, Do you think that AI is dangerous? 87.5% said yes.

Q6, Do you think that AI will take jobs from people? 81.25% said yes.

Q7, Do you think that AI should make decisions for humanity? 81.25% said no.

These results are more interesting than the ones above. People think that making progress in the AI field is a good idea, but most did not want to let AI make decisions for humanity. Also, many people thought that AI would take jobs from people, which is a point closely tied to our NPO and this blog itself.

Finally, I would like to look at the written responses people gave to the following questions.

Q8, How do you think AI can help you? Q9, What do you think the dangers of AI are?

For the first question, many people answered that AI could help in different ways. The answers included helping the elderly, automating things, and doing tasks that are too dangerous for us. 'Helping the elderly' covered two types of responses: one was about replacing people who are too old to work, and the other was about helping in general. For the second question, many people were afraid of AI controlling humanity, while others were a little fearful of AI systems being hacked.



Google Forms

For the Google Forms, K.S. Noctua will explain this section.

First, let us see how well people did on the test.

The average score for the test was 6.45 out of 13 points, and the questions that most people got wrong included:

    - 20% of Japanese citizens hold positive views about receiving nursing care from robots. (9/22 people)
    - The number one country leading in AI in terms of the numbers of startups is… (9/22 people)
    - The second leading country in AI in terms of the numbers of startups is… (5/22 people)
    - The third leading country in AI in terms of the numbers of startups is… (7/22 people)
    - Other countries not in the top 3 are left behind because they lack… (multiple choice) (4/21 people)

    →As you can see, people were able to get the simple, basic questions correct. However, the more sophisticated questions were more likely to be answered incorrectly. This suggests that not many people know much about AI beyond the basic facts.


Next, I would like to look at the written-answer questions and people's responses to them.

Q1, Have you ever had any encounters with AI? If so, what kind?

            1st, Web pages (YouTube, Google)

            2nd, Customer services (Language bots, servers)

            3rd, Siri and help bots

    →What was most surprising was that some people thought they had never encountered AI. We can see that some recognize AI only as something more complicated than the algorithms used in Google's search engine. I also found it surprising that people mentioned YouTube and Google more often than Siri. Many people know that Google's search engine and YouTube use AI to prioritize different content on your results page. Language bots and servers must include the famous Pepper-Kun, Gusto's recent robot-cat server, and many other service robots that talk and move.

Q2, Do you have a favorite AI or AI-based character (e.g. droids from Star Wars)? If so, who?

            There were no overlapping answers to this question. Everyone had different opinions. Notable appearances were Wall-E, Hal from 2001: A Space Odyssey, and Vivi.

    →I think the movies people watched in their childhood, or even after they grew up, deeply affect their preference for AI-based characters. Age might matter as well.

Q3, Do you think that AI can help us? If so, how?

Most people said yes. Here are the top three reasons.

            1st, for everyday tasks

            2nd, for small tasks

            3rd, a solution for the aging population

       →Everyday tasks must include everything from talking to Siri or Alexa to searching for information on the internet. I believe 'small tasks' can be interpreted in two ways: one, as everyday tasks; the other, as tasks in factories and certain jobs handled by AI. The idea that AI could be a solution for the aging population matters especially for Japan. There are news articles saying that some people prefer to be cared for by robots so that they do not feel they are placing a physical or psychological burden on the people taking care of them.

Q4, Do you think that AI should make decisions for humanity? Why or why not?

Most people said no. Here are the top three reasons.

            1st, Humanity will lose its power

            2nd, AI cannot make ethical decisions

            3rd, AI is dangerous

    →Most people believe that AI making decisions for humans would mean mankind losing its power and humanity being dominated by AI. A lack of trust towards AI can also be seen in this result. AI is still at a stage where it is not fully reliable in some situations, so it is easy to imagine that people do not trust it readily.

Q5, What do you think are the dangers of AI?

Here are the top three answers.

             1st, No ethical decision making

             2nd, External factors

             3rd, Biases (from the maker, or the programming)

    →This was an outcome we could expect. AI does not have any morals or the ability to make ethical decisions. I think the responses about biases are interesting. Depending on the programmer, an AI might have different criteria for making judgments.

Q6, Do you think that AI can have goals? Why or why not?

            The answers were split 50/50. People who said 'no' reasoned that AI has no sentience, and those who said 'yes' said that any goals would be set by programming.

    →I think this is a difficult topic. In one article, I remember reading that a missile has a goal (to reach a certain place that the programmers specified and blow it up). ICBMs have AI in them as well; they will fly until they either reach their target or run into some sort of problem. In this way, we could say that AI can have goals. Yet I understand that some people see having a goal as having an emotional attachment to a certain task and wanting to finish or achieve it. The way one interprets 'goals' may differ in this case.

Q7, Do you think that AI will be able to control humans? Why or why not?

            Two more people answered yes than no, and two said maybe in the future. The reasons included 'they are already controlling humans', 'computers will be able to control us', and 'people would intentionally program AI to control humans'.

    →I wonder what some people meant by AI already controlling humans. Is it the ads that pop up on our screens when we are surfing the web? Or is it how addictive the recommendations page on YouTube is? Algorithms may actually be controlling us.

Q8, Do you know what 'beneficial AI' is? If not, what do you think it is? 

            90% of the people answered no. Many people guessed that it was AI that humans can benefit from.

    →Of course, not many people know about beneficial AI. I did not know about it either until I started my research on AI. This is one of the main reasons for, and goals of, our NPO. Beneficial AI will be the key to a safe world with AI. We need people to know the dangers AI poses if safety is not taken seriously.

Q9, Do you like the concept of AI?

            The majority of people said that they liked the concept of AI.

    →This was a positive result for us. It is good that people do not see AI as something that will wreck humanity but rather as something that can enhance our lives.

Q10, What is your opinion on AI as a whole?

            1st, Not bad if we use it right

            2nd, Good

            3rd, Good and Bad, there are pros and cons

    →I was glad to see that people know some of the dangers of AI and see it as something that we should learn to use in an appropriate way. We hope to teach that to our audience with our NPO.

Q11, Do you have any thoughts on AI that you haven't written down?

            Many people said that ethics should be considered more when dealing with AI, and that any sentient form of AI would be bad. Finally, a few people wrote that limiting AI's growth would also be a good idea.

    →Ethics will be hard to incorporate into AI. AI is unlikely to gain emotions, and ethics derive from human emotions. We built our moral principles based on what we 'felt' was good and bad, so incorporating that 'feeling' of right and wrong will long be a challenge for scientists.



Our analysis

  When looking at the answers to the quizzes, we both felt that the majority of people had some general knowledge about AI. From this, we could figure out what kind of people we would want to educate with our NPO, and what kind of knowledge people still needed.

We also realized that people were afraid of, or concerned about, AI taking over jobs; this is another area in which we felt we could educate people with our NPO. We thought that teaching people the skills they need so that AI does not take over their jobs, and what kinds of jobs will be created as AI joins the workforce, would be a good thing to look into when creating our NPO.

Another thing that came up more than we expected was AI controlling humans.  When creating our NPO, we would like to look into the chances of AI controlling us, how it could start, and what to look out for.

    Thank you for participating in our surveys. From these, we have managed to get a general idea of how we will create an NPO about AI.

    We would like to ask anybody who hasn't answered to please go and do so, and again, thank you to everybody who has been kind enough to answer our surveys.

    Thank you for your time!

Comments

  1. You analyzed your survey results well and what you took away from them to inform the creation of your NGO was impressive.
