Twitter is a Private Company

here's the genius GG thinks is worth promoting


drummerboy said:

paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

Elon, along with many other experts, has been warning about it for years.

https://www.washingtonpost.com/news/innovations/wp/2017/08/21/elon-musk-calls-for-ban-on-killer-robots-before-weapons-of-terror-are-unleashed/

Elon Musk calls for ban on killer robots before ‘weapons of terror’ are unleashed

By Peter Holley
August 21, 2017 at 4:30 p.m. EDT

Tesla chief executive Elon Musk has said that artificial intelligence is more of a risk to the world than is North Korea, offering humanity a stark warning about the perilous rise of autonomous machines.

Now the tech billionaire has joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban one of the deadliest forms of such machines: autonomous weapons.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” Musk and 115 other experts, including Alphabet’s artificial intelligence expert, Mustafa Suleyman, warned in an open letter released Monday. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.”

According to the letter, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

The letter — which included signatories from dozens of organizations in nearly 30 countries, including China, Israel, Russia, Britain, South Korea and France — is addressed to the U.N. Convention on Certain Conventional Weapons, whose purpose is restricting weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately,” according to the U.N. Office for Disarmament Affairs. It was released at an artificial intelligence conference in Melbourne, Australia, ahead of formal U.N. discussions on autonomous weapons. Signatories implored U.N. leaders to work hard to prevent an autonomous weapons “arms race” and “avoid the destabilizing effects” of the emerging technology.

In a report released this summer, Izumi Nakamitsu, the head of the disarmament affairs office, said that technology is advancing rapidly but that regulation has not kept pace. She pointed out that some of the world’s military hot spots already have intelligent machines in place, such as “guard robots” in the demilitarized zone between South and North Korea.

For example, the South Korean military is using a surveillance tool called the SGR-A1, which can detect, track and fire upon intruders. The robot was implemented to reduce the strain on the thousands of human guards who man the heavily fortified, 160-mile border. While it does not operate autonomously yet, it has the capability to do so, according to Nakamitsu.

“The system can be installed not only on national borders, but also in critical locations, such as airports, power plants, oil storage bases and military bases,” says a description in a video released by Samsung, which makes the SGR-A1.

“There are currently no multilateral standards or regulations covering military AI applications,” Nakamitsu wrote. “Without wanting to sound alarmist, there is a very real danger that without prompt action, technological innovation will outpace civilian oversight in this space.”

According to Human Rights Watch, autonomous weapons systems are being developed in many of the nations represented in the letter — “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” The concern, the organization says, is that people will become less involved in the process of selecting and firing on targets as machines lacking human judgment begin to play a critical role in warfare. Autonomous weapons “cross a moral threshold,” HRW says.

“The humanitarian and security risks would outweigh any possible military benefit,” HRW argues. “Critics dismissing these concerns depend on speculative arguments about the future of technology and the false presumption that technical advances can address the many dangers posed by these future weapons.”

In recent years, Musk’s warnings about the risks posed by AI have grown increasingly strident — drawing pushback in July from Facebook chief executive Mark Zuckerberg, who called Musk’s dark predictions “pretty irresponsible.” Responding to Zuckerberg, Musk said his fellow billionaire’s understanding of the threat posed by artificial intelligence “is limited.”

Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.

“Once there is awareness, people will be extremely afraid, as they should be,” Musk said. “AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.”


do you have the slightest clue that Musk/Tesla is a major developer of AI for its "self-driving" cars?

how do you reconcile that?

so embarrassing. all you do is dig holes for yourself.

C'mon, man.

The subject of the article is Musk's signing on to a letter warning about the risks of AI applied to weaponry. The other 100 or so signatories are also developers of AI, including Google AI expert Mustafa Suleyman.

Similar to nuclear experts engaged in developing peaceful nuclear energy who warn about the risks of nuclear weapons.

Here's the text of the letter and the signatories (linked in the WaPo article)
https://www.cse.unsw.edu.au/~tw/ciair//open.pdf

An Open Letter to the United Nations Convention on
Certain Conventional Weapons

As companies building the technologies in Artificial Intelligence and Robotics that may be
repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.
We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional
Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous
Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your
deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE.
We entreat the High Contracting Parties participating in the GGE to work hard at finding means to
prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the
destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a
small number of states failing to pay their financial contributions to the UN. We urge the High
Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for
November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed,
they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster
than humans can comprehend. These can be weapons of terror, weapons that despots and
terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We
do not have long to act. Once this Pandora’s box is opened, it will be hard to close.
We therefore implore the High Contracting Parties to find a way to protect us all from these
dangers.

FULL LIST OF SIGNATORIES TO THE OPEN LETTER

Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
Mark Chatterton and Leo Gui, founders & MD of Ingenious AI, Australia.
Charles Gretton, founder of Hivery, Australia.
Brad Lorge, founder & CEO of Premonition.io, Australia
Brenton O’Brien, founder & CEO of Microbric, Australia.
Samir Sinha, founder & CEO of Robonomics AI, Australia.
Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
Peter Turner, founder & MD of Tribotix, Australia.
Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
Ryan Gariepy, founder & CTO, Clearpath Robotics, Canada.
Geoffrey Hinton, founder of DNNResearch Inc, Canada.
James Chow, founder & CEO of UBTECH Robotics, China.
Robert Li, founder & CEO of Sankobot, China.
Marek Rosa, founder & CEO of GoodAI, Czech Republic.
Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
Markus Järve, founder & CEO of Krakul, Estonia.
Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
Esben Østergaard, founder & CTO of Universal Robots, Denmark.
Raul Bravo, founder & CEO of DIBOTICS, France.
Ivan Burdun, founder & President of AIXTREE, France.
Raphael Cherrier, founder & CEO of Qucit, France.
Alain Garnier, founder & CEO of ARISEM (acquired by Thales), founder & CEO of Jamespot, France.
Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
Charles Ollion, founder & Head of Research at Heuritech, France.
Anis Sahbani, founder & CEO of Enova Robotics, France.
Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
Marcus Frei, founder & CEO of NEXT.robotics, Germany.
Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
Fahad Azad, founder of Robosoft Systems, India.
Debashis Das, Ashish Tupate, Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
Pranay Kishore, founder & CEO of Phi Robotics Research, India.
Shahid Memon, founder & CTO of Vanora Robots, India.
Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
Achu Wilson, founder & CTO of Sastra Robotics, India.
Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
Parsa Ghaffari, founder & CEO of Aylien, Ireland.
Alan Holland, founder & CEO of Keelvar Systems, Ireland.
Alessandro Prest, founder & CTO of LogoGrab, Ireland.
Frank Reeves, founder & CEO of Avvio, Ireland.
Alessio Bonfietti, founder & CEO of MindIT, Italy.
Angelo Sudano, founder & CTO of ICan Robotics, Italy.
Domenico Talia, founder and R&D Director of DtoK Labs, Italy.
Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of
HiBot Corporation, Japan.
Andrejs Vasiljevs, founder and CEO of Tilde, Latvia.
Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
Rob Brouwer, founder and Director of Operations, Aeronavics, New Zealand.
Philip Solaris, founder and CEO of X-Craft Enterprises, New Zealand.
Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at
BlueEye Robotics, Norway.
Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations
Officer, Singapore.
Jasper Horrell, founder of DeepData, South Africa.
Onno Huyser and Mark van Wyk, founders of FlyH2 Aerospace, South Africa.
Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
Victor Martin, founder & CEO of Macco Robotics, Spain.
Angel Lis Montesinos, founder & CTO of Neuronalbite, Spain.
Timothy Llewellynn, founder & CEO of nViso, Switzerland.
Francesco Mondada, founder of K-Team, Switzerland.
Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders,
President & CEO of Nnaisense, Switzerland.
Satish Ramachandran, founder of AROBOT, United Arab Emirates.
Silas Adekunle, founder & CEO of Reach Robotics, UK.
Steve Allpress, founder & CTO of FiveAI, UK.
John Bishop, founder and Director of Tungsten Centre for Intelligent Data Analytics, UK.
Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
Demis Hassabis & Mustafa Suleyman, founders, CEO & Head of Applied AI, DeepMind, UK.
Daniel Hulme, founder & CEO of Satalia, UK.
Bradley Kieser, founder & Director of SMS Speedway, UK.
Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
Geoff Pegman, founder & MD of R U Robots, UK.
Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of
PredictionIO, UK.
Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
Steve Cousins, founder & CEO of Savioke, USA.
Brian Gerkey, founder & CEO of Open Source Robotics, USA.
Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
John Hobart, founder & CEO of Coria, USA.
Henry Hu, founder & CEO of Cafe X Technologies, USA.
Zaib Husain, founder & CEO of Makerarm, USA.
Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
Kris Kitchen, founder & Chief Data Scientist at Qieon Research, USA.
Justin Lane, founder of Prospecture Simulation, USA.
Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
Brian Mingus, founder & CTO of Latently, USA.
Mohammad Musa, founder & CEO at Deepen AI, USA.
Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motor, USA.
Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
Erik Nieves, founder & CEO of PlusOne Robotics, USA.
Steve Omohundro, founder & President of Possibility Research, USA.
Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
Greg Phillips, founder & CEO, ThinkIt Data Solutions, USA.
Dan Reuter, founder & CEO of Electric Movement, USA.
Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
Dan Rubins, founder & CEO of Legal Robot, USA.
Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
Andrew Schroeder, founder of WeRobotics, USA.
Stanislav Shalunov, founder & CEO of Clostra, USA
Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
Martin Spencer, founder & CEO of GeckoSystems, USA.
Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
Michael Stuart, founder & CEO of Lucid Holdings, USA.
Madhuri Trivedi, founder & CEO of OrangeHC, USA.
Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.
Reza Zadeh, founder & CEO of Matroid, USA.


drummerboy said:

here's the genius GG thinks is worth promoting

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?


NY Times article on the revised AP African American Studies framework is pilloried by the College Board, which contrasts the NYT article with a report by Politico.

Here's the revised College Board framework:
https://apcentral.collegeboard.org/media/pdf/ap-african-american-studies-course-framework.pdf

My guess: The revised framework enables the study of all topics included in the pilot version and will continue to be opposed by DeSantis.


paulsurovell said:

drummerboy said:

here's the genius GG thinks is worth promoting

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

That's some weak whatabout you got there.

When Ari Melber gives Matt Gaetz a full hour for him to bloviate insanity, give me a call.

And, BTW, Greene is quite different than Gaetz. Besides being publicly a lot more stupid, she's also a lot more incendiary.


paulsurovell said:

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

Are you contending that the interview of Gaetz on MSNBC was similar, in tone and in acceptance of his representations, to Greenwald's interview of Marjorie Taylor Greene?

If not, then you didn't give a good counter-example.


paulsurovell said:

drummerboy said:

do you have the slightest clue that Musk/Tesla is a major developer of AI for its "self-driving" cars?

how do you reconcile that?

so embarrassing. all you do is dig holes for yourself.

C'mon, man.

The subject of the article is Musk's signing on to a letter warning about the risks of AI applied to weaponry. The other 100 or so signatories are also developers of AI, including Google AI expert Mustafa Suleyman.

Similar to nuclear experts engaged in developing peaceful nuclear energy who warn about the risks of nuclear weapons.

You should read the letter. The danger that is warned against is not that A.I. would make "bad decisions" on its own. "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

The better analogy would be if there were nuclear experts warning about the risk of nuclear weapons while working on "peaceful" uses of nuclear explosions. 

By the way, that video was kind of fake-y.

Musk oversaw staged Tesla self-driving video, emails show | Ars Technica


paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

Elon, along with many other experts, has been warning about it for years.

https://www.washingtonpost.com/news/innovations/wp/2017/08/21/elon-musk-calls-for-ban-on-killer-robots-before-weapons-of-terror-are-unleashed/

Elon Musk calls for ban on killer robots before ‘weapons of terror’ are unleashed
...
Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.

“Once  there is awareness, people will be extremely afraid, as they should be,” Musk said. “AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs  or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to individuals as a whole.”

Musk's warning about "awareness" (shades of "The Terminator") is different from the normal set of warnings about the risks of relying on A.I.

You'll be happy to know I read a good book about that a couple of years ago, called "Army of None", which goes into much more detail about real dangers vs. those in the popular imagination that movies feature.

In general, raising the subject of A.I. weapons, in response to Musk stoking ignorance about A.I. being used in a different setting, is a ridiculous "whatabout".

Musk relies on his reputation as tech-savvy, so when he amplifies an uninformed right-wing grievance like that it's a problem.


paulsurovell said:

drummerboy said:

paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

do you have the slightest clue that Musk/Tesla is a major developer of AI for its "self-driving" cars?

C'mon, man.

The subject of the article is Musk's signing on to a letter warning about the risks of AI applied to weaponry.

Ah, so Elon doesn’t have us worried about “AI in general.” I’m relieved.


nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

well maybe ChatGPT won't tell you how great TFG is, but it will tell you about the 2022 Mets in Trump's voice. 

Folks, let me tell you, the 2022 New York Mets had one tremendous season, the best you've ever seen. They were winning like you wouldn't believe, and they had the greatest players, the strongest lineup. You know, I've been a fan of the Mets for a long time, and I know a thing or two about winning, and I can tell you, this team had the stuff. But you know, the fake news media, they don't give the Mets the credit they deserve, they don't cover them like they should. But the real fans, they know, the real fans know what's going on, they know that this team is a winner, and they're gonna keep winning. Believe me, folks, the Mets are going places, and it's all because of their tremendous players and coaches. So let's give them a big round of applause, folks, for a truly fantastic season.

I think I may have lost the crux of paulsurovell's argument. 


ridski said:

I think I may have lost the crux of paulsurovell's argument. 

The crux of all of his arguments in this thread is that Musk is a great, great man and we should shut up.


drummerboy said:

ridski said:

I think I may have lost the crux of paulsurovell's argument. 

The crux of all of his arguments in this thread is that Musk is a great, great man and we should shut up.

or possibly that Musk is a great, great man who may occasionally do some crappy things, but other people do crappy things too, and we should shut up.


DaveSchmidt said:

paulsurovell said:

drummerboy said:

paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

do you have the slightest clue that Musk/Tesla is a major developer of AI for its "self-driving" cars?

C'mon, man.

The subject of the article is Musk's signing on to a letter warning about the risks of AI applied to weaponry.

Ah, so Elon doesn’t have us worried about “AI in general.” I’m relieved.

Perhaps whoever you mean by "us" doesn't have to be worried, but Elon is worried. He wants all advanced AI research to be regulated, including Tesla's

https://techcrunch.com/2020/02/18/elon-musk-says-all-advanced-ai-development-should-be-regulated-including-at-tesla/


drummerboy said:

ridski said:

I think I may have lost the crux of paulsurovell's argument. 

The crux of all of his arguments in this thread is that Musk is a great, great man and we should shut up.

Not the crux of my arguments in this thread, the crux of my arguments in your head.


ml1 said:

drummerboy said:

ridski said:

I think I may have lost the crux of paulsurovell's argument. 

The crux of all of his arguments in this thread is that Musk is a great, great man and we should shut up.

or possibly that Musk is a great, great man who may occasionally do some crappy things, but other people do crappy things too, and we should shut up.

Please don't shut up.


ridski said:

I think I may have lost the crux of paulsurovell's argument. 

Maybe because you said you weren't reading them?


nohero said:

paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

Elon, along with many other experts, has been warning about it for years.

https://www.washingtonpost.com/news/innovations/wp/2017/08/21/elon-musk-calls-for-ban-on-killer-robots-before-weapons-of-terror-are-unleashed/

Elon Musk calls for ban on killer robots before ‘weapons of terror’ are unleashed
...
Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.

“Once  there is awareness, people will be extremely afraid, as they should be,” Musk said. “AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs  or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to individuals as a whole.”

Musk's warning about "awareness" (shades of "The Terminator") is different from the normal set of warnings about the risks of relying on A.I.

You'll be happy to know I read a good book about that a couple of years ago, called "Army of None", which goes into much more detail about real dangers vs. those in the popular imagination that movies feature.

In general, raising the subject of A.I. weapons, in response to Musk stoking ignorance about A.I. being used in a different setting, is a ridiculous "whatabout".

Musk relies on his reputation as tech-savvy, so when he amplifies an uninformed right-wing grievance like that it's a problem.

You don't think that political bias can be injected into AI?


nohero said:

paulsurovell said:

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

Are you contending that the interview of Gaetz on MSNBC was similar, in tone and in acceptance of his representations, to Greenwald's interview of Marjorie Taylor Greene?

If not, then you didn't give a good counter-example.

I watched Ari's interview with Gaetz. It was very amicable and at the end he asked Gaetz if he would come back and Matt agreed enthusiastically.

ETA: And Ari mentioned this wasn't his first MSNBC interview with Gaetz.


drummerboy said:

paulsurovell said:

drummerboy said:

here's the genius GG thinks is worth promoting

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

That's some weak whatabout you got there.

When Ari Melber gives Matt Gaetz a full hour for him to bloviate insanity, give me a call.

And, BTW, Greene is quite different than Gaetz. Besides being publicly a lot more stupid, she's also a lot more incendiary.

You sound pro-Gaetz. That makes you pro-Putin.


paulsurovell said:

drummerboy said:

paulsurovell said:

drummerboy said:

here's the genius GG thinks is worth promoting

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

That's some weak whatabout you got there.

When Ari Melber gives Matt Gaetz a full hour for him to bloviate insanity, give me a call.

And, BTW, Greene is quite different than Gaetz. Besides being publicly a lot more stupid, she's also a lot more incendiary.

You sound pro-Gaetz. That makes you pro-Putin.

Avoid the point.

As usual.


That's hardly avoiding it, Ridski. Perhaps if you'd gone with

,

Or maybe even

;


paulsurovell said:

ridski said:

I think I may have lost the crux of paulsurovell's argument. 

Maybe because you said you weren't reading them?

I skip most of your posts because they’re inane, circular, smarmy, and repetitive. I skip most of your posts because I don’t think you actually have an argument to make; your every participation on MOL appears eristic, so why bother engaging?


I just call him Pablo in order to help him become less eristic …but he’s not taking the bait… the man just loves to argue.


Just learned a new word, "eristic."

paulsurovell said:

nohero said:

paulsurovell said:

Apart from Greene's support for Kevin McCarthy for House Speaker are there meaningful differences between Greene and Matt Gaetz?

Are you contending that the interview of Gaetz on MSNBC was similar, in tone and in acceptance of his representations, to Greenwald's interview of Marjorie Taylor Greene?

If not, then you didn't give a good counter-example.

I watched Ari's interview with Gaetz. It was very amicable and at the end he asked Gaetz if he would come back and Matt agreed enthusiastically.

ETA: And Ari mentioned this wasn't his first MSNBC interview with Gaetz.

Then you know that the interviews weren't comparable, which was my point.


paulsurovell said:

nohero said:

paulsurovell said:

nohero said:

Damn, now it's woke A.I. that Elon has us worried about.

It's AI in general.

Elon, along with many other experts, has been warning about it for years.

https://www.washingtonpost.com/news/innovations/wp/2017/08/21/elon-musk-calls-for-ban-on-killer-robots-before-weapons-of-terror-are-unleashed/ ...

Musk's warning about "awareness" (shades of "The Terminator") is different from the normal set of warnings about the risks of relying on A.I.

You'll be happy to know I read a good book about that a couple of years ago, called "Army of None", which goes into much more detail about real dangers vs. those in the popular imagination that movies feature.

In general, raising the subject of A.I. weapons, in response to Musk stoking ignorance about A.I. being used in a different setting, is a ridiculous "whatabout".

Musk relies on his reputation as tech-savvy, so when he amplifies an uninformed right-wing grievance like that it's a problem.

You don't think that political bias can be injected into AI?

Just so I'm clear, it seems you shifted the topic of your response after I responded to your "Elon and the Killer Robots" response.

As for this (You don't think that political bias can be injected into AI?), the short answer is, "As a general matter, of course it can, depending on the input used." But the topic of Elon's "opining" wasn't general, it was about a specific application. Furthermore, that application has been used by a very large "test group", and there is information about how it's designed.

On the basis of something a random commenter wrote on the Twitter, Elon said that there was a concern with the application. Elon, in theory, has too much tech expertise to reach a conclusion like that.

By the way, the reason I used a screen shot instead of embedding is because Elon "locked" his tweets - based on yet another random theory that the Twitter algorithm may make "locked" tweets more available to a larger audience.


Even though he was amplifying an uninformed claim that the "ChatGPT" engineers had built in a political bias, at least Elon gave an ego boost to the guy who pushed it.


nohero said:

By the way, the reason I used a screen shot instead of embedding is because Elon "locked" his tweets - based on yet another random theory that the Twitter algorithm may make "locked" tweets more available to a larger audience.


Wait, doesn't Elon Musk own twitter? Can't he just ask the people who built it how the algorithm works? Wait, those people were fired? Well who made that decision? Oh.

