Elon Musk Said: Artificial Intelligence Would Be ‘Seriously Dangerous’ by 2019. Reality or Illusion?

by Amelia Scott — 6 months ago in Artificial Intelligence 3 min. read

“The pace of advancement in artificial intelligence is unbelievably rapid. The risk of something seriously harmful occurring is within a timeframe of roughly ten years.” I have no idea why Musk wrote this comment and then chose to delete it.

The first is narrow AI, an AI designed to do one specific thing, like keeping spam out of your inbox. The second is AGI, artificial general intelligence, an AI that is as versatile and smart as you and me. AGI does not exist, and nobody has shown they know how to build it. However, many people are working on the problem.
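To make the distinction concrete, here is a toy sketch (not from the article) of the kind of narrow, single-purpose system the first category describes: it can do nothing except score an email subject line for spam. The keyword list and threshold are invented for illustration.

```python
# A toy illustration of "narrow" AI: a single-purpose spam filter.
# The keywords and threshold are made up for this example.
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent", "click here"}

def looks_like_spam(subject: str, threshold: int = 2) -> bool:
    """Flag a subject line as spam if it contains enough spam keywords."""
    text = subject.lower()
    hits = sum(1 for keyword in SPAM_KEYWORDS if keyword in text)
    return hits >= threshold

print(looks_like_spam("URGENT: click here to claim your FREE prize"))  # True
print(looks_like_spam("Meeting notes for Tuesday"))                    # False
```

A system like this does its one job and nothing more; it has no path to general intelligence.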

So, just how far off are we from building an AGI?

Estimates range from five to 500 years, and I find myself at the upper end of that range. Whether you are worried about AGI or eagerly anticipating it, I believe the sort of technology we see in the films is a century or more away.

Also read: How Artificial Intelligence is Changing Cybersecurity

Why? Quite a few reasons. To begin with, we do not understand how our own brains work. Nobody knows, for example, how the colour of your first bicycle is stored in your brain. We do not even understand how the simplest brains work. The roundworm C. elegans has just 302 neurons in its brain, and dedicated scientists have spent years trying to simulate that tiny brain in a computer; even today there is doubt about whether it can be done.

Beyond not understanding how the brain works, we do not understand how the mind works. The mind is all the seemingly inexplicable things the brain does. Your liver, for example, does not have a sense of humour, but your mind does. No single neuron in your brain is creative, and yet you are.

Where does this come from?

We do not know. Then there is the issue of consciousness. How mere matter can experience the world is a scientific question we do not even know how to pose, let alone answer, and it may be that consciousness is a prerequisite for the kind of intelligence we have.

Where do feelings come from?

How are we able to imagine things?

AI as we practise it today is really a very simple technology. The idea goes like this: take a lot of data about the past, study it, and make predictions about the future. That is it. It is a big deal, to be sure, because it is the beginning of us forming a planet-wide collective memory. For the first time, everybody can learn from the experience of everybody else by using AI to analyse all of the data we now collect. It is a powerful technology in that respect, but a simple one to understand. The idea that this simple technique will somehow produce something as smart and flexible as a person is, to my mind, far-fetched, or at the very least unproven.
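As a minimal sketch (not from the article) of that “study the past, predict the future” loop, the toy example below fits a straight line to made-up past sales figures and extrapolates one step ahead; the numbers and names are invented for illustration.

```python
# A minimal sketch of "learn from past data, predict the future":
# fit a straight line to made-up monthly sales and extrapolate month 7.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6])                   # past observations
sales = np.array([10.0, 12.1, 13.9, 16.2, 18.0, 20.1])  # made-up figures

# "Study" the past: least-squares fit of a line to the data.
slope, intercept = np.polyfit(months, sales, deg=1)

# "Predict" the future: extrapolate the trend to the next month.
next_month = 7
forecast = slope * next_month + intercept
print(f"Forecast for month {next_month}: {forecast:.1f}")
```

Real systems use far more data and far more flexible models, but the shape of the exercise is the same: past examples in, a prediction out.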

So why do people believe AGI is something we are on the cusp of? The first reason is the films and TV we watch. That may sound like a facetious answer, but those images are so realistic and plausible that many of us end up doing something known as “reasoning from fictional evidence.” The images in those films bleed into reality.

Also read: Stanford Develops New AI to Help Doctors Spot Brain Aneurysms

However, the second reason I believe some people expect an AGI soon is that it is the logical conclusion of one core belief: that humans are machines. This reductionist view of people says that because we are machines, it is inevitable that we will eventually build an artificial intelligence like our own, and that once we build it, it will improve quickly and soon surpass us.

Conclusion

In my research I have found an almost universal endorsement of this “humans are machines” idea among people in the tech business, and an almost universal rejection of it from everybody else. You do not need to appeal to any sort of spiritualism to hold the latter view.

Humans might simply be a kind of emergent or quantum phenomenon, or something else that cannot be replicated in a factory. We certainly have abilities that we do not understand, and the idea that we can replicate those abilities in silicon remains unproven for now.

