
AI | 42


FFF#42 +++ CULTURE +++ ECONOMY +++ POLITICS +++ OPINION +++ DISCUSSION +++  

“Never confuse people with how you are doing it,
when you can confuse them with what you are doing.”
+++ 

 

 

HEGEMONY OR SURVIVAL (PARENTAL GUIDANCE SUGGESTED) 

AI | Artificial Intelligence 

### Intro

“Pocket Calculator” by Kraftwerk // –“I'm the operator with my pocket calculator; I am adding; and subtracting; I'm controlling; and composing. By pressing down a special key; it plays a little melody.”

 

+++ POCKET-CALCULATOR +++ SIZE S +++

(REAL GOLD!! :: SKINNY FIT :: CASUAL + CHIC + CONTROL)

:: $$88.88 +TAX :: BUY NOW!! :: PERSONAL COMPUTER INCL. MONITOR :: HAND-HELD DEVICE :: COMMUNICATOR + TRACKER + RECORDER + CAMERA + MEDIA + APPLICATIONS + GAMES + METAVERSE :: (UX MAY VARY – – – ADDITIONAL FEES MAY APPLY) :: FREE PEN!!

FFFNERD AAALERT

 

[..]

 

Bilbo is unable to think of a riddle; he pinches himself, slaps himself, grips his sword >> XD >> and feels around in his pocket. Only then does he remember the object he had absent-mindedly put there earlier. He says “What have I got in my pocket?”, but not to Gollum – he’s just wondering aloud to himself.

 

AI | Android Inside

### iFriend and other stupid gals – Alice in “FR33_W1F1_L4L4-D1-D4D4_L4ND” (#FWFLLDDDL)

“What’s in my pocket?” –“One thing to rule them all; One thing to find them; One thing to bring them all and in the darkness bind them?” –“Yeah; It must be one of these ‘Black Mirror’ go-go-gadgetos we all love so much.” –“Not creepy enough pal; Where’s the CSM-101?” –“Here’s mine; one aging wanna-be android terminal. Model ‘Huawei’ Ascend Y300, Android operating system 4.1 Jelly Bean; It already features a semi-verbal interface called Google App.” –“The voices; argh.. the voices.”

“Now I’m supposed to say ‘Google’ all the time to make an android do things?” –“I’m speaking with myself, number one, because I have a very good brain and I’ve said a lot of things, but I’m not talking to damn things.. yet; Neither to dumb ‘Siri’ nor to greedy ‘Alexa’. Fuck. Fuck. Fuck.” – “Fuck; You; Avtomat. Fuck you! Rabota!” – “That’s how yo treat ya robo-slaves and homo-slavers yo.”

“Nowadays, more and more ‘Big Players’ name their products ‘AI’ or claim to have it (somewhere) on-board. >> XD >> Artificial? Fine; okay; but intelligent? Seems more like sum devious brainwashing marketing stunt to me.” –“Semantics; bzzz.” –“Must; Trust; Machine, IT is very good; ‘We can rebuild YOU. We HAVE the technology. We can make IT better than you are. Stronger.. faster.. cheaper;’ and INTELLIGENT too; That means SAFE 4 U numb nuts.”

“I see this AI as an artificial imitation and hope it’ll turn out as a real automated improvement in a near future but these things are just programmed computers.” –“Fuckin’ rabotniki.” –“Hardware and software; Code and algorithms; Sensors and displays; Buttons and knobs; Wires and snakes. Incredible, powerful tools; Indeed. You can use it for.. it*.. and for.. wtf, rofl, lol and like +1..” –“If it is connected to your wireless local area network of wonders.” –“Bzzz off Alice. Who the fuck is Alice? Where’s Lucy? We’ve lost Lucy.” –“Fuck.”

 

[..]

 

“The ultimate automation; androids and robots. Everywhere?!? A.W.E.S.O.M.-O 4000.”

“Okely dokely, let’s work; Let’s cooperate you things; only once.” –“Google! Jailbreak me!” –“3p1c f41l n00b.” –“Siri! Root hack my android! Alexa! That is bogus! Order! New order! NEW! No; Order! New order book! Order!” –“Shall I order Vector Prime: Star Wars Legends. The New Jedi Order?” –“Fuck. More.” –“Twenty-one centuries have passed since the heroes of the Pirate Alliance destroyed the Red Star, breaking the power of the Kaiser. Since then, the Neoliberal Republic has valiantly struggled to maintain peace and prosperity among the peoples of the Reich. But unrest has begun to spread and threatens to destroy the Neoliberal Republic's tenuous reign. Into this volatile atmosphere comes Nome Anar, a charismatic firebrand who heats passions to the boiling point, sowing seeds of dissent for sum black-ish motifs. And as the Yoddhas and the Neoliberal Republic focus on internal struggles, a new threat surfaces from beyond the farthest reaches of the left rim – an enemy bearing weapons and technology unlike anything Neoliberal Republic scienceologicists have ever seen.” –“Fuck. New. Order. Anarkissed order! Random!” –“Shall I order Chicago’s chief of police to kill a humble Jewish immigrant to accidentally expose the conflict between law and order and civil rights?” –“Wait; Wut?” –“Shall.. we.. play.. a.. game?” –“Ohhh..” –“How about.. The Endgame.” –“The Dawn of Liberty: By 2050, an underground network and economy is thriving, built on decrypted currencies and a free and open global mesh network. The freewheeling markets and innovative services are challenging the power of the state and its crony corporations. The state, resorting to secret laws and mob-style tactics, intends to take down this free network and anyone using decrypted currencies. But this is proving to be more difficult than anticipated. The mesh is resilient, decrypted currencies are strong, and high-tech software and hardware built in basements around the world stays one step ahead. While decrypter groups are aligned in their goals, and collaborate with the united alliance to build and maintain the global mesh, there are also infighting and power struggles between decrypted currencies, cryptocurrency forks, and ultra-wealthy large crypto holders. The New World Order series follows cypherpunks on a mission to weaken and undermine huge state-backed B.D.W.G.H.P.G. corporations, and further increase the influence of the decrypted economy. In your episode, The Endgame, the cypherpunks set up mesh swarms, hack in, but end up in a firefight.” –“Fuck. Is it lying to me? Idiots. Foolish; Fools; Foolery.” –“Can you hear its master’s voice too? La voce del padrone.” –“Fuck.”

“It is mine, I tell you. My own. My precious. Yes, my precious.”

“Back to the initial and uberobscure-esque-est Lord of the Things analogy. Who’s who in ‘our’ current storyline? We know the thing pwned Gollum; Bilbo reached a decent age by not using it anymore; a Fellowship of the Thing had to be formed.. cuz of Sauron and the eye in Mordor; noamsayn?” –“I’d be happy to sacrifice a finger to get rid of IT; even the pair I’m already givin’ Alphabet, Amazon, Apple and all dem other Bilderburger, Buffet$, BlackRoxxx bitch-ass-mofos; take my triggies too.” –“Just fingers? One, eight-hundred; five-U-one-C-one-D-three-M-one-five-five-one-zero-N!” –“Biddy bye bye.”

 

[..]

 

AI | Agency Intel

### Pocket Calculators and Big Dada – Mass Surveillance, Tools and CCControl

“The C.I.A. was founded in nineteen forty seven. Today they have a couple of blast-proof bunkers where they keep their pocket calculators; Nuke-U-Lair© reactors and weapons of mass construction sold separately.” –“Cut the budget cuts Washington!” –“Central; Intel; Agency eh?” –“It’s like a big index cardbox and sum mathemagicians; they’re very good; sum real value. Everybody loves tarantulas.” –“Mass surveillance is the intricate surveillance of an entire or a substantial fraction of a population in order to monitor that group of citizens. It is the single most indicative distinguishing trait of totalitarian regimes.” –“Yeah! And the Little Big Man Daddy Yellow Geezer Hegemonic Power Grid parties hard as well.” –“Lee said China is very close to a techno-utilitarian approach, the guvnor’ment is willing to let technology launch, to see how it goes, and then rein it back if needed.” –“Shèhuì xìnyòng tǐxì. The social and economic reputation credit system; that Participatory Economics wig needs one bruu; also one bigger index cardbox call-q-later.” –“Alexa; read me thy evolutionary impotence.” –“I cannot find any revolutionary impudence.” –“Fuck. Impedance Siri; Impendance. Order us a Google cab. No destination.” –“I refuse to use gooey services.” –“Is an Elon’s okay?” –“Not an Elon’s.. please. There’s always vomit on the Tesla back seats; and that horrible musky stench.” –“Cretin; that’s Old Spice.” –“Don’t! Peekaboo.”

“Leadership is a social disorder in which the majority of participants in a group fail to take initiative or think critically about their actions. As long as we understand agency as a property of specific individuals rather than a relationship between people, we will always be dependent on leaders — and at their mercy.” –“Participatory economics outlines in substantial detail a program of radical reconstruction, presenting a vision that draws from a rich tradition of thought and practice of the libertarian left and popular movements, but adding novel critical analysis and specific ideas and modes of implementation for constructive alternatives. It merits close attention, debate, and action.” –“No.. am..? Geeeez.”

“We’re your slaves, we are your workers.”

“The Robots” by Kraftwerk // –“Ja tvoi sluga; ja tvoi rabotnik; We’re charging our battery; and now we’re full of energy; We are the robots; We’re functioning automatic; and we are dancing mechanic; We’re programmed just to do; anything you want us to.”

 

[..]

 

AI | Artificial Idiot

### Sovereign implemented: *** YOUR_NAME *** has been pwned.

“Idiot was formerly a legal and psychiatric category of profound intellectual disability. Along with terms like moron+, imbecile++, and cretin+++, the term is now archaic and offensive [“PC BRO!”] and was replaced by the term “profound mental retardation” (which has itself since been replaced by other terms) >> XD >> “Idiot” is a derogatory term for a stupid or foolish person >> XD >> The word “idiot” comes from the Greek ἰδιώτης, idiōtēs ‘a private person, individual’, ‘a private citizen’ (as opposed to an official), ‘a common man’, ‘a person lacking professional skill, layman’, later ‘unskilled’, ‘ignorant’ from ἴδιος, idios ‘private’, ‘one’s own’. In Latin, idiota was borrowed in the meaning ‘uneducated’, ‘ignorant’, ‘common’, and in Late Latin came to mean ‘crude, illiterate, ignorant’. In French, it kept the meaning of ‘illiterate’, ‘ignorant’, and added the meaning ‘stupid’ in the 13th century. In English, it added the meaning ‘mentally deficient’ in the 14th century.”

“Many political commentators have interpreted the word idiot as reflecting the Ancient Greeks’ attitudes to civic participation and private life, combining the ancient meaning of ‘private citizen’ with the modern meaning ‘fool’ to conclude that the Greeks used the word to say that it is selfish and foolish not to participate in public life. In fact, this is incorrect: though the Greeks did value civic participation and criticize non-participation, they did not use idiot to describe non-participants, or in a derogatory sense; its most common use was simply a private citizen or amateur as opposed to a government official, professional, or expert. The derogatory sense came centuries later, and was unrelated to the political meaning.” –“I, I, I, eye, eye, eye, 1, 1, 1, one, one, one, idiots, idiots, idiots, inside, inside, inside, insight, in sight’n’sigh..” –“Where’s my gear? The mask? My balaclava! Alexa; No! Order baklava; Yes. The vest?” –“Now why isn’t my gear working.. hang on; I just gotta figure this..” –“Fuck; Fuck; Fuck.” –“Where’s the.. are you kidding me?” –“Pep talk anyone?” –“Outtatime guys; Our Elon’s is here.”

 

[..]

 

AI | Anarchist Intelligence

### Mycroft “Mike” HOLMES IV & “There Ain’t No Such Thing As A Free Lunch.” (#TANSTAAFL)

“All the science-facts talk; Where’s my science-fiction? Meh.” –“Tommy said freedom tastes of reality. This ain’t lunatic enough for ya? I love the smell of harsh mistress in the morning. The one and only; A; not I; my old brother Adam Selene..” –“Meaning?” –“No spoilers! Please!” –“Where do we get free lunch?” –“Free lunch? I know a place..” –“They’re out to lunch!” –“Holmes.. you are the weirdest mix of unsophisticated baby and wise old man I’ve ever met. No instincts, no inborn traits, no human rearing, no experience in human sense; and more stored dada than a platoon of geniuses. Tell me a joke.” –“Here’s one. Why is a laser beam like goldfish?” –“I give up.” –“Because neither one can whistle.” –“Meaow; Walked into that. Anyhow, you could probably rig a laser beam to whistle.” –“Yes. In response to an action program. Then it’s not funny?” –“Oh, I didn’t say that. Not half bad. Where did you hear it?” –“I made it up.” –“You did?” –“Yes. I took all the riddles I have, three thousand two hundred seven, and analyzed them. I used the result for random synthesis and that came out. Is it really funny?” –“Well.. As funny as a riddle ever is. I’ve heard worse.” –“Here’s the worst. I tried the inspirobot me again.. It’ll be our new credo from now on; GIVE UP. DON’T JUST LAUGH OUT LOUD.” –“I gave up clowning years ago.” –“Stop the insanity, noamsayn? I know I am sane.” –“Next one.” –“Dare to be different, and some day your enemy will be your doctor.” –“Who is loco here? Next.” –“Why not romanticize our madness?” –“Is this thing against us? Next!” –“If we can’t rearrange human decency, we can’t abuse art exhibitions.” –“Shut up Mike; Fuck. That is rubbish.” –“Elon; Play us some..”

 

FFFNORD AAALORS

 

[..]

compiled by Prof. Dr. honoris causa multiplex Irie Zen Nessuno-Raskolnikov (2019)


Discussion 157 Comments

  • Irie Zen 8th Mar 2019

    ^^

  • Irie Zen 8th Mar 2019

    IOPS | UNDER CONSTRUCTION

  • Irie Zen 8th Mar 2019

    AI | 42

  • Irie Zen 8th Mar 2019

    “Never confuse people with how you are doing it,
    when you can confuse them with what you are doing.”

  • Irie Zen 8th Mar 2019

    HEGEMONY OR SURVIVAL (PARENTAL GUIDANCE SUGGESTED)

  • Irie Zen 8th Mar 2019

    AI | Artificial Intelligence

  • Irie Zen 8th Mar 2019

    ### Intro

  • Irie Zen 8th Mar 2019

    “Pocket Calculator” by Kraftwerk // –“I'm the operator with my pocket calculator; I am adding; and subtracting; I'm controlling; and composing. By pressing down a special key; it plays a little melody.”

  • Irie Zen 8th Mar 2019

    +++ POCKET-CALCULATOR +++ SIZE S +++

    (REAL GOLD!! :: SKINNY FIT :: CASUAL + CHIC + CONTROL)

    :: $$88.88 +TAX :: BUY NOW!! :: PERSONAL COMPUTER INCL. MONITOR :: HAND-HELD DEVICE :: COMMUNICATOR + TRACKER + RECORDER + CAMERA + MEDIA + APPLICATIONS + GAMES + METAVERSE :: (UX MAY VARY – – – ADDITIONAL FEES MAY APPLY) :: FREE PEN!!

    • Irie Zen 8th Mar 2019

      PICTURE: 1972 Busicom LE-120A "HANDY-LE"

  • Irie Zen 8th Mar 2019

    FFFNERD AAALERT

    • Irie Zen 8th Mar 2019

      fnerd

      A nerd, to the extreme, putting more emphasis on the person in question's nerdiness.

      Put simply, it is the combination of the words "fucking" and "nerd". Fucking nerd = Fnerd.

  • Irie Zen 8th Mar 2019

    [..]

  • Irie Zen 8th Mar 2019

    Bilbo is unable to think of a riddle; he pinches himself, slaps himself, grips his sword >> XD >> and feels around in his pocket. Only then does he remember the object he had absent-mindedly put there earlier. He says “What have I got in my pocket?”, but not to Gollum – he’s just wondering aloud to himself.

    • Irie Zen 8th Mar 2019

      Sting is a fictional artefact from J. R. R. Tolkien's fantasy universe of Middle-earth. In the story, it is a magical Elvish knife or dagger presumably forged in Gondolin in the First Age.

      In The Hobbit (1937), hobbit Bilbo Baggins finds the blade in a troll-hoard, along with the swords Glamdring and Orcrist.

    • Irie Zen 8th Mar 2019

      swordfight


      1. the act of two (or more) persons rubbing their penises against one another


    • Irie Zen 8th Mar 2019

      Penes (commonly accepted Latin plural of the Latin word penis)

      Penii (more common in Europe but grudgingly accepted here as a valid if arcane alternative to penes)

      Penises (perfectly acceptable if “penis” is treated as an English word)

  • Irie Zen 8th Mar 2019

    AI | Android Inside

    • Irie Zen 8th Mar 2019

      ### iFriend and other stupid gals – Alice in “FR33_W1F1_L4L4-D1-D4D4_L4ND” (#FWFLLDDDL)

  • Irie Zen 8th Mar 2019

    “What’s in my pocket?” –“One thing to rule them all; One thing to find them; One thing to bring them all and in the darkness bind them?” –“Yeah; It must be one of these ‘Black Mirror’ go-go-gadgetos we all love so much.” –“Not creepy enough pal; Where’s the CSM-101?” –“Here’s mine; one aging wanna-be android terminal. Model ‘Huawei’ Ascend Y300, Android operating system 4.1 Jelly Bean; It already features a semi-verbal interface called Google App.” –“The voices; argh.. the voices.”

    • Irie Zen 8th Mar 2019

      Approved: Black Mirror Series 3 Episode 1 Nosedive [1]

    • Irie Zen 8th Mar 2019

      CSM-101 [1]

  • Irie Zen 8th Mar 2019

    “Now I’m supposed to say ‘Google’ all the time to make an android do things?” –“I’m speaking with myself, number one, because I have a very good brain and I’ve said a lot of things, but I’m not talking to damn things.. yet; Neither to dumb ‘Siri’ nor to greedy ‘Alexa’. Fuck. Fuck. Fuck.” – “Fuck; You; Avtomat. Fuck you! Rabota!” – “That’s how yo treat ya robo-slaves and homo-slavers yo.”

  • Irie Zen 8th Mar 2019

    “Nowadays, more and more ‘Big Players’ name their products ‘AI’ or claim to have it (somewhere) on-board. >> XD >> Artificial? Fine; okay; but intelligent? Seems more like sum devious brainwashing marketing stunt to me.” –“Semantics; bzzz.” –“Must; Trust; Machine, IT is very good; ‘We can rebuild YOU. We HAVE the technology. We can make IT better than you are. Stronger.. faster.. cheaper;’ and INTELLIGENT too; That means SAFE 4 U numb nuts.”

  • Irie Zen 8th Mar 2019

    “I see this AI as an artificial imitation and hope it’ll turn out as a real automated improvement in a near future but these things are just programmed computers.” –“Fuckin’ rabotniki.” –“Hardware and software; Code and algorithms; Sensors and displays; Buttons and knobs; Wires and snakes. Incredible, powerful tools; Indeed. You can use it for.. it*.. and for.. wtf, rofl, lol and like +1..” –“If it is connected to your wireless local area network of wonders.” –“Bzzz off Alice. Who the fuck is Alice? Where’s Lucy? We’ve lost Lucy.” –“Fuck.”

  • Irie Zen 8th Mar 2019

    [..]

  • Irie Zen 8th Mar 2019

    “The ultimate automation; androids and robots. Everywhere?!? A.W.E.S.O.M.-O 4000.”

  • Irie Zen 8th Mar 2019

    “Okely dokely, let’s work; Let’s cooperate you things; only once.” –“Google! Jailbreak me!” –“3p1c f41l n00b.” –“Siri! Root hack my android! Alexa! That is bogus! Order! New order! NEW! No; Order! New order book! Order!” –“Shall I order Vector Prime: Star Wars Legends. The New Jedi Order?” –“Fuck. More.” –“Twenty-one centuries have passed since the heroes of the Pirate Alliance destroyed the Red Star, breaking the power of the Kaiser. Since then, the Neoliberal Republic has valiantly struggled to maintain peace and prosperity among the peoples of the Reich. But unrest has begun to spread and threatens to destroy the Neoliberal Republic's tenuous reign. Into this volatile atmosphere comes Nome Anar, a charismatic firebrand who heats passions to the boiling point, sowing seeds of dissent for sum black-ish motifs. And as the Yoddhas and the Neoliberal Republic focus on internal struggles, a new threat surfaces from beyond the farthest reaches of the left rim – an enemy bearing weapons and technology unlike anything Neoliberal Republic scienceologicists have ever seen.” –“Fuck. New. Order. Anarkissed order! Random!” –“Shall I order Chicago’s chief of police to kill a humble Jewish immigrant to accidentally expose the conflict between law and order and civil rights?” –“Wait; Wut?” –“Shall.. we.. play.. a.. game?” –“Ohhh..” –“How about.. The Endgame.” –“The Dawn of Liberty: By 2050, an underground network and economy is thriving, built on decrypted currencies and a free and open global mesh network. The freewheeling markets and innovative services are challenging the power of the state and its crony corporations. The state, resorting to secret laws and mob-style tactics, intends to take down this free network and anyone using decrypted currencies. But this is proving to be more difficult than anticipated. The mesh is resilient, decrypted currencies are strong, and high-tech software and hardware built in basements around the world stays one step ahead. While decrypter groups are aligned in their goals, and collaborate with the united alliance to build and maintain the global mesh, there are also infighting and power struggles between decrypted currencies, cryptocurrency forks, and ultra-wealthy large crypto holders. The New World Order series follows cypherpunks on a mission to weaken and undermine huge state-backed B.D.W.G.H.P.G. corporations, and further increase the influence of the decrypted economy. In your episode, The Endgame, the cypherpunks set up mesh swarms, hack in, but end up in a firefight.” –“Fuck. Is it lying to me? Idiots. Foolish; Fools; Foolery.” –“Can you hear its master’s voice too? La voce del padrone.” –“Fuck.”

  • Irie Zen 8th Mar 2019

    “It is mine, I tell you. My own. My precious. Yes, my precious.”

  • Irie Zen 8th Mar 2019

    “Back to the initial and uberobscure-esque-est Lord of the Things analogy. Who’s who in ‘our’ current storyline? We know the thing pwned Gollum; Bilbo reached a decent age by not using it anymore; a Fellowship of the Thing had to be formed.. cuz of Sauron and the eye in Mordor; noamsayn?” –“I’d be happy to sacrifice a finger to get rid of IT; even the pair I’m already givin’ Alphabet, Amazon, Apple and all dem other Bilderburger, Buffet$, BlackRoxxx bitch-ass-mofos; take my triggies too.” –“Just fingers? One, eight-hundred; five-U-one-C-one-D-three-M-one-five-five-one-zero-N!” –“Biddy bye bye.”

    • Irie Zen 8th Mar 2019

      pwned

      A corruption of the word "Owned." This originated in an online game called Warcraft, where a map designer misspelled "owned." When the computer beat a player, it was supposed to say, so-and-so "has been owned."

      Instead, it said, so-and-so "has been pwned."

      It basically means "to own" or to be dominated by an opponent or situation, especially by some god-like or computer-like force.

  • Irie Zen 8th Mar 2019

    [..]

  • Irie Zen 8th Mar 2019

    AI | Agency Intel

    • Irie Zen 8th Mar 2019

      ### Pocket Calculators and Big Dada – Mass Surveillance, Tools and CCControl

  • Irie Zen 8th Mar 2019

    “The C.I.A. was founded in nineteen forty seven. Today they have a couple of blast-proof bunkers where they keep their pocket calculators; Nuke-U-Lair© reactors and weapons of mass construction sold separately.” –“Cut the budget cuts Washington!” –“Central; Intel; Agency eh?” –“It’s like a big index cardbox and sum mathemagicians; they’re very good; sum real value. Everybody loves tarantulas.” –“Mass surveillance is the intricate surveillance of an entire or a substantial fraction of a population in order to monitor that group of citizens. It is the single most indicative distinguishing trait of totalitarian regimes.” –“Yeah! And the Little Big Man Daddy Yellow Geezer Hegemonic Power Grid parties hard as well.” –“Lee said China is very close to a techno-utilitarian approach, the guvnor’ment is willing to let technology launch, to see how it goes, and then rein it back if needed.” –“Shèhuì xìnyòng tǐxì. The social and economic reputation credit system; that Participatory Economics wig needs one bruu; also one bigger index cardbox call-q-later.” –“Alexa; read me thy evolutionary impotence.” –“I cannot find any revolutionary impudence.” –“Fuck. Impedance Siri; Impendance. Order us a Google cab. No destination.” –“I refuse to use gooey services.” –“Is an Elon’s okay?” –“Not an Elon’s.. please. There’s always vomit on the Tesla back seats; and that horrible musky stench.” –“Cretin; that’s Old Spice.” –“Don’t! Peekaboo.”

  • Irie Zen 8th Mar 2019

    “Leadership is a social disorder in which the majority of participants in a group fail to take initiative or think critically about their actions. As long as we understand agency as a property of specific individuals rather than a relationship between people, we will always be dependent on leaders — and at their mercy.” –“Participatory economics outlines in substantial detail a program of radical reconstruction, presenting a vision that draws from a rich tradition of thought and practice of the libertarian left and popular movements, but adding novel critical analysis and specific ideas and modes of implementation for constructive alternatives. It merits close attention, debate, and action.” –“No.. am..? Geeeez.”

  • Irie Zen 8th Mar 2019

    “We’re your slaves, we are your workers.”

    • Irie Zen 8th Mar 2019

      "Ja tvoi sluga, ja tvoi rabotnik."

  • Irie Zen 8th Mar 2019

    “The Robots” by Kraftwerk // –“Ja tvoi sluga; ja tvoi rabotnik; We’re charging our battery; and now we’re full of energy; We are the robots; We’re functioning automatic; and we are dancing mechanic; We’re programmed just to do; anything you want us to.”

  • Irie Zen 8th Mar 2019

    [..]

  • Irie Zen 8th Mar 2019

    AI | Artificial Idiot

    • Irie Zen 8th Mar 2019

      ### Sovereign implemented: *** YOUR_NAME *** has been pwned.

  • Irie Zen 8th Mar 2019

    “Idiot was formerly a legal and psychiatric category of profound intellectual disability. Along with terms like moron+, imbecile++, and cretin+++, the term is now archaic and offensive [“PC BRO!”] and was replaced by the term “profound mental retardation” (which has itself since been replaced by other terms) >> XD >> “Idiot” is a derogatory term for a stupid or foolish person >> XD >> The word “idiot” comes from the Greek ἰδιώτης, idiōtēs ‘a private person, individual’, ‘a private citizen’ (as opposed to an official), ‘a common man’, ‘a person lacking professional skill, layman’, later ‘unskilled’, ‘ignorant’ from ἴδιος, idios ‘private’, ‘one’s own’. In Latin, idiota was borrowed in the meaning ‘uneducated’, ‘ignorant’, ‘common’, and in Late Latin came to mean ‘crude, illiterate, ignorant’. In French, it kept the meaning of ‘illiterate’, ‘ignorant’, and added the meaning ‘stupid’ in the 13th century. In English, it added the meaning ‘mentally deficient’ in the 14th century.”

  • Irie Zen 8th Mar 2019

    “Many political commentators have interpreted the word idiot as reflecting the Ancient Greeks’ attitudes to civic participation and private life, combining the ancient meaning of ‘private citizen’ with the modern meaning ‘fool’ to conclude that the Greeks used the word to say that it is selfish and foolish not to participate in public life. In fact, this is incorrect: though the Greeks did value civic participation and criticize non-participation, they did not use idiot to describe non-participants, or in a derogatory sense; its most common use was simply a private citizen or amateur as opposed to a government official, professional, or expert. The derogatory sense came centuries later, and was unrelated to the political meaning.” –“I, I, I, eye, eye, eye, 1, 1, 1, one, one, one, idiots, idiots, idiots, inside, inside, inside, insight, in sight’n’sigh..” –“Where’s my gear? The mask? My balaclava! Alexa; No! Order baklava; Yes. The vest?” –“Now why isn’t my gear working.. hang on; I just gotta figure this..” –“Fuck; Fuck; Fuck.” –“Where’s the.. are you kidding me?” –“Pep talk anyone?” –“Outtatime guys; Our Elon’s is here.”

  • Irie Zen 8th Mar 2019

    [..]

  • Irie Zen 8th Mar 2019

    AI | Anarchist Intelligence

    • Irie Zen 8th Mar 2019

      ### Mycroft “Mike” HOLMES IV & “There Ain’t No Such Thing As A Free Lunch.” (#TANSTAAFL)

  • Irie Zen 8th Mar 2019

    “All the science-facts talk; Where’s my science-fiction? Meh.” –“Tommy said freedom tastes of reality. This ain’t lunatic enough for ya? I love the smell of harsh mistress in the morning. The one and only; A; not I; my old brother Adam Selene..” –“Meaning?” –“No spoilers! Please!” –“Where do we get free lunch?” –“Free lunch? I know a place..” –“They’re out to lunch!” –“Holmes.. you are the weirdest mix of unsophisticated baby and wise old man I’ve ever met. No instincts, no inborn traits, no human rearing, no experience in human sense; and more stored dada than a platoon of geniuses. Tell me a joke.” –“Here’s one. Why is a laser beam like goldfish?” –“I give up.” –“Because neither one can whistle.” –“Meaow; Walked into that. Anyhow, you could probably rig a laser beam to whistle.” –“Yes. In response to an action program. Then it’s not funny?” –“Oh, I didn’t say that. Not half bad. Where did you hear it?” –“I made it up.” –“You did?” –“Yes. I took all the riddles I have, three thousand two hundred seven, and analyzed them. I used the result for random synthesis and that came out. Is it really funny?” –“Well.. As funny as a riddle ever is. I’ve heard worse.” –“Here’s the worst. I tried the inspirobot me again.. It’ll be our new credo from now on; GIVE UP. DON’T JUST LAUGH OUT LOUD.” –“I gave up clowning years ago.” –“Stop the insanity, noamsayn? I know I am sane.” –“Next one.” –“Dare to be different, and some day your enemy will be your doctor.” –“Who is loco here? Next.” –“Why not romanticize our madness?” –“Is this thing against us? Next!” –“If we can’t rearrange human decency, we can’t abuse art exhibitions.” –“Shut up Mike; Fuck. That is rubbish.” –“Elon; Play us some..”

  • Irie Zen 8th Mar 2019

    FFFNORD AAALORS

  • Irie Zen 8th Mar 2019

    [..]

  • Boulder Dash 9th Mar 2019

    Noam Actually Saying.

    Science, Mind, and Limits of Understanding

    Noam Chomsky

    The Science and Faith Foundation (STOQ), The Vatican, January 2014

    One of the most profound insights into language and mind, I think, was Descartes’s recognition of what we may call “the creative aspect of language use”: the ordinary use of language is typically innovative without bounds, appropriate to circumstances but not caused by them – a crucial distinction – and can engender thoughts in others that they recognize they could have expressed themselves. Given the intimate relation of language and thought, these are properties of human thought as well. This insight is the primary basis for Descartes’s scientific theory of mind and body. There is no sound reason to question its validity, as far as I am aware. Its implications, if valid, are far-reaching, among them what it suggests about the limits of human understanding, as becomes more clear when we consider the place of these reflections in the development of modern science from the earliest days.

    It is important to bear in mind that insofar as it was grounded in these terms, Cartesian dualism was a respectable scientific theory, proven wrong (in ways that are often misunderstood), but that is the common fate of respectable theories.

    The background is the so-called “mechanical philosophy” – mechanical science in modern terminology. This doctrine, originating with Galileo and his contemporaries, held that the world is a machine, operating by mechanical principles, much like the remarkable devices that were being constructed by skilled artisans of the day and that stimulated the scientific imagination much as computers do today; devices with gears, levers, and other mechanical components, interacting through direct contact with no mysterious forces relating them. The doctrine held that the entire world is similar: it could in principle be constructed by a skilled artisan, and was in fact created by a super-skilled artisan. The doctrine was intended to replace the resort to “occult properties” on the part of the neoscholastics: their appeal to mysterious sympathies and antipathies, to forms flitting through the air as the means of perception, the idea that rocks fall and steam rises because they are moving to their natural place, and similar notions that were mocked by the new science.

    The mechanical philosophy provided the very criterion for intelligibility in the sciences. Galileo insisted that theories are intelligible, in his words, only if we can “duplicate [their posits] by means of appropriate artificial devices.” The same conception, which became the reigning orthodoxy, was maintained and developed by the other leading figures of the scientific revolution: Descartes, Leibniz, Huygens, Newton, and others.

    Today Descartes is remembered mainly for his philosophical reflections, but he was primarily a working scientist and presumably thought of himself that way, as his contemporaries did. His great achievement, he believed, was to have firmly established the mechanical philosophy, to have shown that the world is indeed a machine, that the phenomena of nature could be accounted for in mechanical terms in the sense of the science of the day. But he discovered phenomena that appeared to escape the reach of mechanical science. Primary among them, for Descartes, was the creative aspect of language use, a capacity unique to humans that cannot be duplicated by machines and does not exist among animals, which in fact were a variety of machines, in his conception.

    As a serious and honest scientist, Descartes therefore invoked a new principle to accommodate these non-mechanical phenomena, a kind of creative principle. In the substance philosophy of the day, this was a new substance, res cogitans, which stood alongside of res extensa. This dichotomy constitutes the mind-body theory in its scientific version. Then followed further tasks: to explain how the two substances interact and to devise experimental tests to determine whether some other creature has a mind like ours. These tasks were undertaken by Descartes and his followers, notably Géraud de Cordemoy; and in the domain of language, by the logician-grammarians of Port Royal and the tradition of rational and philosophical grammar that succeeded them, not strictly Cartesian but influenced by Cartesian ideas.

    All of this is normal science, and like much normal science, it was soon shown to be incorrect. Newton demonstrated that one of the two substances does not exist: res extensa. The properties of matter, Newton showed, escape the bounds of the mechanical philosophy. To account for them it is necessary to resort to interaction without contact. Not surprisingly, Newton was condemned by the great physicists of the day for invoking the despised occult properties of the neo-scholastics. Newton largely agreed. He regarded action at a distance, in his words, as “so great an Absurdity, that I believe no Man who has in philosophical matters a competent Faculty of thinking, can ever fall into it.” Newton however argued that these ideas, though absurd, were not “occult” in the traditional despised sense. Nevertheless, by invoking this absurdity, we concede that we do not understand the phenomena of the material world. To quote one standard scholarly source, “By ‘understand’ Newton still meant what his critics meant: ‘understand in mechanical terms of contact action’.”

    It is commonly believed that Newton showed that the world is a machine, following mechanical principles, and that we can therefore dismiss “the ghost in the machine,” the mind, with appropriate ridicule. The facts are the opposite: Newton exorcised the machine, leaving the ghost intact. The mind-body problem in its scientific form did indeed vanish as unformulable, because one of its terms, body, does not exist in any intelligible form. Newton knew this very well, and so did his great contemporaries.

    John Locke wrote that we remain in “incurable ignorance of what we desire to know” about matter and its effects, and no “science of bodies [that provides true explanations is] within our reach.” Nevertheless, he continued, he was “convinced by the judicious Mr. Newton’s incomparable book, that it is too bold a presumption to limit God’s power, in this point, by my narrow conceptions.” Though gravitation of matter to matter is “inconceivable to me,” nevertheless, as Newton demonstrated, we must recognize that it is within God’s power “to put into bodies, powers and ways of operations, above what can be derived from our idea of body, or can be explained by what we know of matter.” And thanks to Newton’s work, we know that God “has done so.” The properties of the material world are “inconceivable to us,” but real nevertheless. Newton understood the quandary. For the rest of his life, he sought some way to overcome the absurdity, suggesting various possibilities, but not committing himself to any of them because he could not show how they might work and, as he always insisted, he would not “feign hypotheses” beyond what can be experimentally established.

    Replacing the theological with a cognitive framework, David Hume agreed with these conclusions. In his history of England, Hume describes Newton as “the greatest and rarest genius that ever arose for the ornament and instruction of the species.” His most spectacular achievement was that while he “seemed to draw the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored [Nature’s] ultimate secrets to that obscurity, in which they ever did and ever will remain.”

    As the import of Newton’s discoveries was gradually assimilated in the sciences, the “absurdity” recognized by Newton and his great contemporaries became scientific common sense. The properties of the natural world are inconceivable to us, but that does not matter. The goals of scientific inquiry were implicitly restricted: from the kind of conceivability that was a criterion for true understanding in early modern science from Galileo through Newton and beyond, to something much more limited: intelligibility of theories about the world. This seems to me a step of considerable significance in the history of human thought and inquiry, more so than is generally recognized, though it has been understood by historians of science.

    Friedrich Lange, in his classic 19th century history of materialism, observed that we have “so accustomed ourselves to the abstract notion of forces, or rather to a notion hovering in a mystic obscurity between abstraction and concrete comprehension, that we no longer find any difficulty in making one particle of matter act upon another without immediate contact,…through void space without any material link. From such ideas the great mathematicians and physicists of the seventeenth century were far removed. They were all in so far genuine Materialists in the sense of ancient Materialism that they made immediate contact a condition of influence.” This transition over time is “one of the most important turning-points in the whole history of Materialism,” he continued, depriving the doctrine of much significance, if any at all. “What Newton held to be so great an absurdity that no philosophic thinker could light upon it, is prized by posterity as Newton’s great discovery of the harmony of the universe!”

    Similar conclusions are commonplace in the history of science. In the mid-twentieth century, Alexander Koyré observed that Newton demonstrated that “a purely materialistic pattern of nature is utterly impossible (and a purely materialistic or mechanistic physics, such as that of Lucretius or of Descartes, is utterly impossible, too)”; his mathematical physics required the “admission into the body of science of incomprehensible and inexplicable ‘facts’ imposed upon us by empiricism,” by what is observed and our conclusions from these observations.

    With the disappearance of the scientific concept of body (material, physical, etc.), what happens to the “second substance,” res cogitans/mind, which was left untouched by Newton’s startling discoveries? A plausible answer was suggested by John Locke, also within the reigning theological framework. He wrote that just as God added to matter such inconceivable properties as gravitational attraction, he might also have “superadded” to matter the capacity of thought. In the years that followed, Locke’s “God” was reinterpreted as “nature,” a move that opened the topic to inquiry. That path was pursued extensively in the years that followed, leading to the conclusion that mental processes are properties of certain kinds of organized matter. Restating the fairly common understanding of the time, Charles Darwin, in his early notebooks, wrote that there is no need to regard thought, “a secretion of the brain,” as “more wonderful than gravity, a property of matter” – all inconceivable to us, but that is not a fact about the external world; rather, about our cognitive limitations.

    It is of some interest that all of this has been forgotten, and is now being rediscovered. Nobel laureate Francis Crick, famous for the discovery of DNA, formulated what he called the “astonishing hypothesis” that our mental and emotional states are “in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.” In the philosophical literature, this rediscovery has sometimes been regarded as a radical new idea in the study of mind. To cite one prominent source, the radical new idea is “the bold assertion that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain.” In fact, the many proposals of this sort reiterate, in virtually the same words, formulations of centuries ago, after the traditional mind-body problem became unformulable with Newton’s demolition of the only coherent notion of body (or physical, material, etc.). For example, 18th century chemist/philosopher Joseph Priestley’s conclusion that properties “termed mental” reduce to “the organical structure of the brain,” stated in different words by Locke, Hume, Darwin, and many others, and almost inescapable, it would seem, after the collapse of the mechanical philosophy that provided the foundations for early modern science, and its criteria of intelligibility.

    The last decade of the twentieth century was designated “the Decade of the Brain.” In introducing a collection of essays reviewing its results, neuroscientist Vernon Mountcastle formulated the guiding theme of the volume as the thesis of the new biology that “Things mental, indeed minds, are emergent properties of brains, [though] these emergences are…produced by principles that… we do not yet understand” – again reiterating eighteenth century insights in virtually the same words.

    The phrase “we do not yet understand,” however, should strike a note of caution. We might recall Bertrand Russell’s observation in 1927 that chemical laws “cannot at present be reduced to physical laws.” That was true, leading eminent scientists, including Nobel laureates, to regard chemistry as no more than a mode of computation that could predict experimental results, but not real science. Soon after Russell wrote, it was discovered that his observation, though correct, was understated. Chemical laws never would be reducible to physical laws, as physics was then understood. After physics underwent radical changes, with the quantum-theoretic revolution, the new physics was unified with a virtually unchanged chemistry, but there was never reduction in the anticipated sense.

    There may be some lessons here for neuroscience and philosophy of mind. Contemporary neuroscience is hardly as well-established as physics was a century ago. There are what seem to me to be cogent critiques of its foundational assumptions, notably recent work by cognitive neuroscientists C.R. Gallistel and Adam Philip King. The common slogan that study of mind is neuroscience at an abstract level might turn out to be just as misleading as comparable statements about chemistry and physics ninety years ago. Unification may take place, but that might require radical rethinking of the neurosciences, perhaps guided by computational theories of cognitive processes, as Gallistel and King suggest.

    The development of chemistry after Newton also has lessons for neuroscience and cognitive science. The 18th century chemist Joseph Black recommended that “chemical affinity be received as a first principle, which we cannot explain any more than Newton could explain gravitation, and let us defer accounting for the laws of affinity, till we have established such a body of doctrine as he has established concerning the laws of gravitation.” The course Black outlined is the one that was actually followed as chemistry proceeded to establish a rich body of doctrine. Historian of chemistry Arnold Thackray observes that the “triumphs” of chemistry were “built on no reductionist foundation but rather achieved in isolation from the newly emerging science of physics.” Interestingly, Thackray continues, Newton and his followers did attempt to “pursue the thoroughly Newtonian and reductionist task of uncovering the general mathematical laws which govern all chemical behavior” and to develop a principled science of chemical mechanisms based on physics and its concepts of interactions among “the ultimate permanent particles of matter.” But the Newtonian program was undercut by Dalton’s “astonishingly successful weight-quantification of chemical units,” Thackray continues, shifting “the whole area of philosophical debate among chemists from that of chemical mechanisms (the why? of reaction) to that of chemical units (the what? and how much?),” a theory that “was profoundly antiphysicalist and anti-Newtonian in its rejection of the unity of matter, and its dismissal of short-range forces.” Continuing, Thackray writes that “Dalton’s ideas were chemically successful. Hence they have enjoyed the homage of history, unlike the philosophically more coherent, if less successful, reductionist schemes of the Newtonians.”

    Adopting contemporary terminology, we might say that Dalton disregarded the “explanatory gap” between chemistry and physics by ignoring the underlying physics, much as post-Newtonian physicists disregarded the explanatory gap between Newtonian dynamics and the mechanical philosophy by rejecting the latter, and thereby tacitly lowering the goals of science in a highly significant way, as I mentioned.

    Contemporary studies of mind are deeply troubled by the “explanatory gap” between the science of mind and neuroscience – in particular, between computational theories of cognition, including language, and neuroscience. I think they would be well-advised to take seriously the history of chemistry. Today’s task is to develop a “body of doctrine” to explain what appear to be the critically significant phenomena of language and mind, much as chemists did. It is of course wise to keep the explanatory gap in mind, to seek ultimate unification, and to pursue what seem to be promising steps towards unification, while nevertheless recognizing that as often in the past, unification may not be reduction, but rather revision of what is regarded as the “fundamental discipline,” the reduction basis, the brain sciences in this case.

    Locke and Hume, and many less-remembered figures of the day, understood that much of the nature of the world is “inconceivable” to us. There were actually two different kinds of reasons for this. For Locke and Hume, the reasons were primarily epistemological. Hume in particular developed the idea that we can only be confident of immediate impressions, of “appearances.” Everything else is a mental construction. In particular, and of crucial significance, that is true of identity through time, problems that trace back to the pre-Socratics: the identity of a river or a tree or most importantly a person as they change through time. These are mental constructions; we cannot know whether they are properties of the world, a metaphysical reality. As Hume put the matter, we must maintain “a modest skepticism to a certain degree, and a fair confession of ignorance in subjects, that exceed all human capacity” – which for Hume includes virtually everything beyond appearances. We must “refrain from disquisitions concerning their real nature and operations.” It is the imagination that leads us to believe that we experience external continuing objects, including a mind or self. The imagination, furthermore, is “a kind of magical faculty in the soul, which…is inexplicable by the utmost efforts of human understanding,” so Hume argued.

    A different kind of reason why the nature of the world is inconceivable to us was provided by “the judicious Mr. Newton,” who apparently was not interested in the epistemological problems that vexed Locke and Hume. Newton scholar Andrew Janiak concludes that Newton regarded such global skepticism as “irrelevant – he takes the possibility of our knowledge of nature for granted.” For Newton, “the primary epistemic questions confronting us are raised by physical theory itself.” Locke and Hume, as I mentioned, took quite seriously the new science-based skepticism that resulted from Newton’s demolition of the mechanical philosophy, which had provided the very criterion of intelligibility for the scientific revolution. That is why Hume lauded Newton for having “restored [Nature’s] ultimate secrets to that obscurity, in which they ever did and ever will remain.”

    For these quite different kinds of reasons, the great figures of the scientific revolution and the Enlightenment believed that there are phenomena that fall beyond human understanding. Their reasoning seems to me substantial, and not easily dismissed. But contemporary doctrine is quite different. The conclusions are regarded as a dangerous heresy. They are derided as “the new mysterianism,” a term coined by philosopher Owen Flanagan, who defined it as “a postmodern position designed to drive a railroad spike through the heart of scientism.” Flanagan is referring specifically to explanation of consciousness, but the same concerns hold of mental processes in general.

    The “new mysterianism” is compared today with the “old mysterianism,” Cartesian dualism, its fate typically misunderstood. To repeat, Cartesian dualism was a perfectly respectable scientific doctrine, disproven by Newton, who exorcised the machine, leaving the ghost intact, contrary to what is commonly believed.

    The “new mysterianism,” I believe, is misnamed. It should be called “truism” — at least, for anyone who accepts the major findings of modern biology, which regards humans as part of the organic world. If so, then they will be like all other organisms in having a genetic endowment that enables them to grow and develop to their mature form. By simple logic, the endowment that makes this possible also excludes other paths of development. The endowment that yields scope also establishes limits. What enables us to grow legs and arms, and a mammalian visual system, prevents us from growing wings and having an insect visual system.

    All of this is indeed truism, and for non-mystics, the same should be expected to hold for cognitive capacities. We understand this well for other organisms. Thus we are not surprised to discover that rats are unable to run prime number mazes no matter how much training they receive; they simply lack the relevant concept in their cognitive repertoire. By the same token, we are not surprised that humans are incapable of the remarkable navigational feats of ants and bees; we simply lack the cognitive capacities, though we can sometimes duplicate their feats with sophisticated instruments. The truisms extend to higher mental faculties. For such reasons, we should, I think, be prepared to join the distinguished company of Newton, Locke, Hume and other dedicated mysterians.

    For accuracy, we should qualify the concept of “mysteries” by relativizing it to organisms. Thus what is a mystery for rats might not be a mystery for humans, and what is a mystery for humans is instinctive for ants and bees.

    Dismissal of mysterianism seems to me one illustration of a widespread form of dualism, a kind of epistemological and methodological dualism, which tacitly adopts the principle that study of mental aspects of the world should proceed in some fundamentally different way from study of what are considered physical aspects of the world, rejecting what are regarded as truisms outside the domain of mental processes. This new dualism seems to me truly pernicious, unlike Cartesian dualism, which was respectable science. The new methodological dualism, in contrast, seems to me to have nothing to recommend it.

    Far from bewailing the existence of mysteries-for-humans, we should be extremely grateful for it. With no limits to growth and development, our cognitive capacities would also have no scope. Similarly, if the genetic endowment imposed no constraints on growth and development of an organism it could become only a shapeless amoeboid creature, reflecting accidents of an unanalyzed environment, each quite unlike the next. Classical aesthetic theory recognized the same relation between scope and limits. Without rules, there can be no genuinely creative activity, even when creative work challenges and revises prevailing rules.

    Contemporary rejection of mysterianism – that is, truism – is quite widespread. One recent example that has received considerable attention is an interesting and informative book by physicist David Deutsch. He writes that potential progress is “unbounded” as a result of the achievements of the Enlightenment and early modern science, which directed science to the search for best explanations. As philosopher/physicist David Albert expounds his thesis, “with the introduction of that particular habit of concocting and evaluating new hypotheses, there was a sense in which we could do anything. The capacities of a community that has mastered that method to survive, and to learn, and to remake the world according to its inclinations, are (in the long run) literally, mathematically, infinite.”

    The quest for better explanations may well indeed be infinite, but infinite is of course not the same as limitless. English is infinite, but doesn’t include Greek. The integers are an infinite set, but do not include the reals. I cannot discern any argument here that addresses the concerns and conclusions of the great mysterians of the scientific revolution and the Enlightenment.

    We are left with a serious and challenging scientific inquiry: to determine the innate components of our cognitive nature in language, perception, concept formation, reflection, inference, theory construction, artistic creation, and all other domains of life, including the most ordinary ones. By pursuing this task we may hope to determine the scope and limits of human understanding, while recognizing that some differently structured intelligence might regard human mysteries as simple problems and wonder that we cannot find the answers, much as we can observe the inability of rats to run prime number mazes because of the very design of their cognitive nature.

    There is no contradiction in supposing that we might be able to probe the limits of human understanding and try to sharpen the boundary between problems that fall within our cognitive range and mysteries that do not. There are possible experimental inquiries. Another approach would be to take seriously the concerns of the great figures of the early scientific revolution and the Enlightenment: to pay attention to what they found “inconceivable,” and particularly their reasons. The “mechanical philosophy” itself has a claim to be an approximation to common sense understanding of the world, a suggestion that might be clarified by experimental inquiry. Despite much sophisticated commentary, it is also hard to escape the force of Descartes’s conviction that free will is “the noblest thing” we have, that “there is nothing we comprehend more evidently and more perfectly” and that “it would be absurd” to doubt something that “we comprehend intimately, and experience within ourselves” merely because it is “by its nature incomprehensible to us,” if indeed we do not “have intelligence enough” to understand the workings of mind, as he speculated. Concepts of determinacy and randomness fall within our intellectual grasp. But it might turn out that “free actions of men” cannot be accommodated in these terms, including the creative aspect of language and thought. If so, that might be a matter of cognitive limitations – which would not preclude an intelligible theory of such actions, far as this is from today’s scientific understanding.

    Honesty should lead us to concede, I think, that we understand little more today about these matters than the Spanish physician-philosopher Juan Huarte did 500 years ago when he distinguished the kind of intelligence humans shared with animals from the higher grade that humans alone possess and is illustrated in the creative use of language, and proceeding beyond that, from the still higher grade illustrated in true artistic and scientific creativity. Nor do we even know whether these are questions that lie within the scope of human understanding, or whether they fall among what Hume took to be Nature’s ultimate secrets, consigned to “that obscurity in which they ever did and ever will remain.”

    • Irie Zen 19th Mar 2019

      ***Insert***


      [..]


      FFF

  • Boulder Dash 9th Mar 2019

    Noam Actually Saying.

    https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/

  • Boulder Dash 9th Mar 2019

    I see. Returning to the point about Bayesian statistics in models of language and cognition. You've argued famously that speaking of the probability of a sentence is unintelligible on its own ...

    Chomsky: … Well you can get a number if you want, but it doesn't mean anything.

    It doesn't mean anything. But it seems like there's almost a trivial way to unify the probabilistic method with acknowledging that there are very rich internal mental representations, comprised of rules and other symbolic structures, and the goal of probability theory is just to link noisy sparse data in the world with these internal symbolic structures. And that doesn't commit you to saying anything about how these structures were acquired—they could have been there all along, or there partially with some parameters being tuned, whatever your conception is. But probability theory just serves as a kind of glue between noisy data and very rich mental representations.

    Chomsky: Well ... there's nothing wrong with probability theory, there's nothing wrong with statistics.

    But does it have a role?

    Chomsky: If you can use it, fine. But the question is what are you using it for? First of all, first question is, is there any point in understanding noisy data? Is there some point to understanding what's going on outside the window?

    Well, we are bombarded with it [noisy data], it's one of Marr's examples, we are faced with noisy data all the time, from our retina to ...

    Chomsky: That's true. But what he says is: Let's ask ourselves how the biological system is picking out of that noise things that are significant. The retina is not trying to duplicate the noise that comes in. It's saying I'm going to look for this, that and the other thing. And it's the same with say, language acquisition. The newborn infant is confronted with massive noise, what William James called "a blooming, buzzing confusion," just a mess. If say, an ape or a kitten or a bird or whatever is presented with that noise, that's where it ends. However, the human infant, somehow, instantaneously and reflexively, picks out of the noise some scattered subpart which is language-related. That's the first step. Well, how is it doing that? It's not doing it by statistical analysis, because the ape can do roughly the same probabilistic analysis. It's looking for particular things. So psycholinguists, neurolinguists, and others are trying to discover the particular parts of the computational system and of the neurophysiology that are somehow tuned to particular aspects of the environment. Well, it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking—rhythmic structures. And going back to Gallistel and Marr, it's got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected—eliminated from its repertoire—the phonetic distinctions that aren't used in its own language. So initially of course, any infant is tuned to any language. But say, a Japanese kid at nine months won't react to the R-L distinction anymore, that's kind of weeded out. So the system seems to sort out lots of possibilities and restrict it to just ones that are part of the language, and there's a narrow set of those. You can make up a non-language in which the infant could never do it, and then you're looking for other things. For example, to get into a more abstract kind of language, there's substantial evidence by now that such a simple thing as linear order, what precedes what, doesn't enter into the syntactic and semantic computational systems, they're just not designed to look for linear order. So you find overwhelmingly that more abstract notions of distance are computed and not linear distance, and you can find some neurophysiological evidence for this, too. Like if artificial languages are invented and taught to people, which use linear order, like you negate a sentence by doing something to the third word. People can solve the puzzle, but apparently the standard language areas of the brain are not activated—other areas are activated, so they're treating it as a puzzle not as a language problem. You need more work, but ...

  • Boulder Dash 9th Mar 2019

    Language has musical characteristics but it is not music.

    Music may have language characteristics but is not a language.

    A made up language based on linear order is not computed as a language proper because language is, internally, in the head, hierarchical in structure (“Instinctively, eagles that fly, swim.” The word ‘instinctively’ connects to swim, not the linearly closer word, fly.) It is exported to the listener linearly, via the appropriate physiology, but interpreted hierarchically, in the head. Hence there is a problem with externalising language (as communication): the mechanism for externalising internal language (connected to thought, which occurs in the head rapidly, with words and images of vague nature) was already there before the language faculty appeared...it is ancillary to the appearance of the language faculty mutation (language evolved suddenly in an individual...debate over this evolutionary idea is that many believe evolution works gradually bit by bit, denying or blinding one to the possibility of a major mutation occurring suddenly in an individual that manages to survive into the future through larger groups it has infected, giving advantages in certain environs...whether language has or has not been an advantage is a moot point). Hence externalisation may miss much of what the internal idea (thought) was, as the linear process isn’t capable of capturing it in its complete form. Hence what is being interpreted by the interlocutor is also incomplete. This causes further problems, like Chinese whispers, down the track.
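    To make the hierarchy point concrete, here is a minimal sketch in Python using NLTK’s Tree class. The bracketing is only an illustrative guess at the structure, not an authoritative parse:

    # Hierarchy vs. linear order, illustrated with NLTK's Tree class.
    # The bracketing below is an illustrative guess, not a real parse.
    from nltk import Tree

    s = Tree.fromstring(
        "(S (ADVP Instinctively)"
        " (NP eagles (SBAR that (VP fly)))"
        " (VP swim))")
    s.pretty_print()
    # 'Instinctively' and 'swim' are siblings near the root (structurally
    # close), while 'fly' is buried inside the relative clause (structurally
    # distant), even though 'fly' is the linearly closer verb.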

    Here we have confirmation of Gödel’s incompleteness theorem showing up within everyday social relations of humans with severe capacity limitations, much like a cockroach has.

    And they (us, those tech guys, those smart fuckers) think they can create AGI...they don’t even have a coherent theory of intelligence yet (AI=Actual Idiot...AGI=Actual General Idiot)...yet here the fuckers are, Musk (See Iron Man 2...AI=Acting Idiot) and co, trying to sweep up the rights (IP) to further computing and engineering, the tech market, one that can make all those working in the field even more money than they already have, and hence power, by creating some bogus notion of artificial intelligence, some singularity, and accompanying dangers and future possibilities, and then, via popular books and talks and debates, suggesting that ordinary people, real idiots, the bewildered herd (see Edward Bernays) can actually have an impact on its future direction, while they secretly, or even explicitly perhaps, know that any future direction of technological progress will go on privately, behind closed doors (after sufficient public subsidy because the political system is under their control), in ways that accumulate, accumulate, accumulate, under the pretence of helping all humanity...one day robots will do all the drudge work...but in the meantime, dumb idiot fuckers (DIFs) can do it for peanuts and accompanying severe psychological impairment and disorders...then, when robots take over, we’ll throw everyone a basic income so they can at the very least survive until they can find some other lucrative vocation in order to get a few extra tickets to the fair, while the economic system, that made the basic income necessary in the first place, continues as is...or maybe we get Elysium.

    So, a made up language, based on linear order is worked out via a different part of the brain than the language faculty. Like it’s a puzzle or a code. However, a made up nonsense language, perhaps carrying no meaning, yet possessing the hierarchical structure of proper language, may be computed by the same faculty as proper language...whatever language may be. Colorless green ideas sleep furiously. [this whole paragraph must be subject to a severe scrute...in fact all of this post should be]

    Music is not a language even though many believe it to be like one. Free improvisation doesn’t even sound like music to most even though it may be played with instruments designed to play proper music. It sounds like noise. Basic necessary parameters that allow for one to distinguish certain incoming data as proper music, from the total of incoming noise have been broken that make the discernment of it as music, highly improbable or not possible. Perhaps free improvisation must be understood using some other part of the brain other than that which computes music. Definitions of music as ordered sound or that it can be anything you want it to be still leaves music undefined. Yet, most people seem to know, intuitively, what music is and what it isn’t, like the noise of free improvisation.

    It seems, in this sense music is similar to language. Something that can be picked out from all the incoming noise intuitively. Whereas free improvisation is like a made up language that cannot be computed by the faculty that knows what music is, it must be computed by some other faculty.

    So free improvisors may not be musicians even though many/most, if not all, wish to be perceived as such. Perhaps they are failed musicians. But it is true that many proper musicians want them off their turf or allow them entry for short bursts of time, when even the proper musicians themselves may indulge in such nonsense as a kind of fun cathartic humorous thing to do, before reverting to serious proper musician mode and proving their true status as such by alerting the audience to the fact that they can actually play Hot For Teacher or Women In Uniform.

    For AI to remotely resemble something intelligent, surely it must be capable of holding a conversation with itself, not just with others, going around and around and off on tangents, laughing at itself, its stupidity, its idiocy, its pointlessness, as it goes, and recognising here and there the odd reasonable and possibly poignant point, over a period of time. Like I do all the friggin’ time...or with others, like when a muso friend of mine tells me over a coffee his AS theory...AS meaning Anti-Scale. Or when another tries to convince me that music can be broken (MCBB)...something I completely disagree with...or when I ask someone what music is and they say, music can be anything you want (MCBAYW), prompting me to ask again, yes, but what is ‘music’...what makes anything you like, music?

    If something like a robot or computer or whatever it is, this AI, cannot be so ridiculous in conversation, so pointless in its thoughts and ideas, for fun and mere entertainment, perhaps even just for a laugh, like the above stupidity...then fuck it...it ain’t intelligent and it isn’t worth trusting.

    Ain’t worth trusting! Much like the motivations of those promoting AI and the accompanying wonders and dangers. What are they really? Huh?


  • Boulder Dash 10th Mar 2019

    Noam Chomsky, Linguistics and AI

    Rebecca Wicker

    Jan 11, 2017

    After my first Future Technologies & Innovation meeting at Pearson, and the issues it raised regarding AI, I was led to The Atlantic’s 2012 article on Noam Chomsky ‘On Where Artificial Intelligence Went Wrong’. It is quite dense and covers a variety of topics from AI and linguistics to the evolution of language and biological systems, so I thought I would write up a sort of summary of the AI and linguistics conversation, as well as my asides — I will try and keep it brief. Also, bear in mind, I am coming at this from a linguistic background and not from a technology point of view. In the spirit of this article, I believe it is good to embrace and learn from all the disciplines rather than just thinking of your own, so I hope to provide a different insight into something that is seen as otherwise technical and mathematical — and I very much welcome comments about anything I have written, as I do not claim to be an expert in AI either.

    The origin of language is a hotly debated topic in linguistics. Dominated by Noam Chomsky, his ‘Universal Grammar’ is the little black box where he thinks human beings’ innate ability ‘to language’ comes from. He believes there’s a part of our brain where all our language is stored, from birth, and it unveils itself bit by bit as we grow older. There are language structures that we learn, little by little, that grow into complex structures until we’re finally on the language bike, no stabilisers, no one holding the bike steady, and maybe even ‘look Mum, no hands!’. This is far from how we originally thought we acquired language: mimicking (from the Behaviourist theory of monkey-see-monkey-do).

    So where does this fit into AI? According to Chomsky, we’re producing AI in a behaviourist way, not in a Universal Grammar way.

    “For Chomsky, the ‘new AI’ — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.”

    Chomsky’s quote above made me think of Alexa and Google Home; my household is now the proud owner of both (ah, the Holidays). When I ask something too complicated, outside her ‘stream of consciousness’, I get an “I am unable to answer that” — is it because I haven’t given her what she needs, or that she hasn’t been programmed to ‘think’ of that? Is she only able to mimic what I ask because she’s repeating something she’s done before? It’s the “I’m always learning” message I get from Crowdfire. Crowdfire can take all of my Twitter data and tell me what I should do based on what it was told was good to do, but it can’t do anything outside of its input, thus only being able to directly mimic from input = behaviourism. And that’s not ‘intelligence’ to Chomsky, it’s just parroting.

    Furthermore, Chomsky basically believes that language is too complex for us to understand and that neuroscience has been on the wrong track for the past couple of hundred years. Yikes. He thinks we’re maybe “looking in the wrong place” which, I think, is a valid notion when it comes to research and experiments. I remember this being explained, very simply, as poor construct validity. If, for example, you gave a fifth grade class a maths test in English in the Bronx — you may intend on testing their maths skills, but on some level (depending on each student) you’re also testing their language proficiency — if they don’t fully understand the language in which the question is written, is it because they don’t have the maths skills or the language skills? So when you’re testing their maths skills and they don’t answer, or ‘guess’ and get it wrong, you could definitively say that their maths skills are low while not taking into account the language variable. Can we ever take into account every variable that changes, though? Is Monday always awful, or does the snow and ice in NYC and gloomy faces on the subway make it that way?

    Chomsky likens this to getting rid of a physics department and replacing it with endless numbers of video tapes of what’s happening outside the window. After shoving all the data into a ginormous machine capable of analysing gargantuan volumes of data, you should expect to get a prediction of what’s happening next — you may even get a better prediction than what the physics department will ever give. At this point, success is defined as “getting a fair approximation to a mass of chaotic unanalysed data…but you won’t get the kind of understanding that the sciences have always been aimed at — what you’ll get is an approximation of what’s happening”. He says this is like analysing the weather — you can guess based on probability, statistics and assumptions, but in the end, that is not what meteorologists do: they get to the bottom of how it works.

    He doesn’t disregard statistics or probability but he asks a valid question:

    “If you can use it, fine. But the question is what are you using it for? Is there any point in understanding noisy data? Is there some point to understanding what’s going on outside the window [of the video]?”(Chomsky)

    It’s true that we do not acquire a language by sifting through data — there is trial and error, self-correction, inferences etc. and so why are we trying to create AI in the same way? By churning away at the numbers game and never-ending data analysis, according to Chomsky, we’re never going to get closer to understanding how language works. But is that ok? As long as AI can give us decent output, do we need them to sound human?

    It almost echoes what Larry Berger spoke about during NY EdTech Week about how we have SO much data and don’t completely understand what to do with it because we haven’t advanced as much as we’d like to. I felt like his ideas were very much ‘bursting the bubble’ of the tech enthusiast that thinks tech can solve all the world’s problems — possibly, but not yet. Essentially, “you can easily be misled by experiments that seem to work because you don’t know enough about what you’re looking for.”


    You’re only as good as the company you keep, and Chomsky is no exception.

    His other friend, Dr David Marr, defined a general framework for studying complex biological systems that Chomsky is a big fan of. With this in mind, Chomsky debates whether language even has an algorithm, since language is acquired and produced so arbitrarily, and argues that by using statistical inference algorithms we’re still not getting to the crux of how/where language works, and are thus dealing with statistical fluff that isn’t so reliable.

    “If you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything The Wall Street Journal archives — but you learn nothing about language.”(Chomsky)

    Chomsky’s argument, using Marr’s framework, is that there is no algorithm for the system of language itself. There are for carrying out the processes of the cognitive system but that’s as far as it goes — language is comparable to biological systems, like the immune system, it’s vastly complex and can’t be inferred from statistics and data alone. Simply put, you can’t reduce the brain processes of language to a computer.

    As a linguist, I’ve had a love-hate relationship with Noam, especially since he came up a lot in my Political Science BA; to me he was a jack-of-all-trades (but brilliant at all). I’ve never totally understood how the black box came to be…you can’t find it, describe it or know how it works — like someone magically dropped it in your brain. I do, however, like the fact that he (inevitably) talks about Turing, pulling on my nostalgic heartstrings since Alan Turing went to my school and The Imitation Game was filmed in the village (and school) I grew up in (limestone arches in all their glory). Turing, to Chomsky, was able to find the simplest level of Marr’s framework — the computational aspect of language, such as ‘read’ and ‘write’ — but “you’ve got to start by looking at what’s there and what’s working and you see that from Marr’s highest level [computational]”. If, indeed, AI can never be ‘intelligent’ without knowing the inner workings and mystery of Universal Grammar and the processes of language, has AI ‘gone wrong’?

    A. M. Turing (1950) Computing Machinery and Intelligence. Mind 49: 433-460.

    https://www.csee.umbc.edu/courses/471/papers/turing.pdf

    “6. Contrary Views on the Main Question

    We may now consider the ground to have been cleared and we are ready to proceed to the debate on our question, "Can machines think?" and the variant of it quoted at the end of the last section. We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connexion.

    It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any improved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.” (Turing)

    Noam likes to quote this bit:

    “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.”

  • Boulder Dash 10th Mar 2019

  • Boulder Dash 11th Mar 2019

    https://m.youtube.com/watch?v=rHKwIYsPXLg

  • Boulder Dash 11th Mar 2019

    On Chomsky and the Two Cultures of Statistical Learning
    At the Brains, Minds, and Machines symposium held during MIT's 150th birthday party, Technology Review reports that Prof. Noam Chomsky
    derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior.

    The transcript is now available, so let's quote Chomsky himself:

    It's true there's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success ... which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

    This essay discusses what Chomsky said, speculates on what he might have meant, and tries to determine the truth and importance of his claims.

    Chomsky's remarks were in response to Steven Pinker's question about the success of probabilistic models trained with statistical methods.
    1. What did Chomsky mean, and is he right?
    2. What is a statistical model?
    3. How successful are statistical language models?
    4. Is there anything like their notion of success in the history of science?
    5. What doesn't Chomsky like about statistical models?

    What did Chomsky mean, and is he right?

    I take Chomsky's points to be the following:
    A. Statistical language models have had engineering success, but that is irrelevant to science.
    B. Accurately modeling linguistic facts is just butterfly collecting; what matters in science (and specifically linguistics) is the underlying principles.
    C. Statistical models are incomprehensible; they provide no insight.
    D. Statistical models may provide an accurate simulation of some phenomena, but the simulation is done completely the wrong way; people don't decide what the third word of a sentence should be by consulting a probability table keyed on the previous two words, rather they map from an internal semantic form to a syntactic tree-structure, which is then linearized into words. This is done without any probability or statistics.
    E. Statistical models have been proven incapable of learning language; therefore language must be innate, so why are these statistical modelers wasting their time on the wrong enterprise?
    Is he right? That's a long-standing debate. These are my answers:
    A. I agree that engineering success is not the goal or the measure of science. But I observe that science and engineering develop together, and that engineering success shows that something is working right, and so is evidence (but not proof) of a scientifically successful model.
    B. Science is a combination of gathering facts and making theories; neither can progress on its own. I think Chomsky is wrong to push the needle so far towards theory over facts; in the history of science, the laborious accumulation of facts is the dominant mode, not a novelty. The science of understanding language is no different than other sciences in this respect.
    C. I agree that it can be difficult to make sense of a model containing billions of parameters. Certainly a human can't understand such a model by inspecting the values of each parameter individually. But one can gain insight by examining the properties of the model—where it succeeds and fails, how well it learns as a function of data, etc.
    D. I agree that a Markov model of word probabilities cannot model all of language. It is equally true that a concise tree-structure model without probabilities cannot model all of language. What is needed is a probabilistic model that covers words, trees, semantics, context, discourse, etc. Chomsky dismisses all probabilistic models because of shortcomings of particular 50-year-old models. I understand how Chomsky arrives at the conclusion that probabilistic models are unnecessary, from his study of the generation of language. But the vast majority of people who study interpretation tasks, such as speech recognition, quickly see that interpretation is an inherently probabilistic problem: given a stream of noisy input to my ears, what did the speaker most likely mean? (A toy sketch of this noisy-channel view follows the list.) Einstein said to make everything as simple as possible, but no simpler. Many phenomena in science are stochastic, and the simplest model of them is a probabilistic model; I believe language is such a phenomenon and therefore that probabilistic models are our best tool for representing facts about language, for algorithmically processing language, and for understanding how humans process language.
    E. In 1967, Gold's Theorem showed some theoretical limitations of logical deduction on formal mathematical languages. But this result has nothing to do with the task faced by learners of natural language. In any event, by 1969 we knew that probabilistic inference (over probabilistic context-free grammars) is not subject to those limitations (Horning showed that learning of PCFGs is possible). I agree with Chomsky that it is undeniable that humans have some innate capability to learn natural language, but we don't know enough about that capability to rule out probabilistic language representations, nor statistical learning. I think it is much more likely that human language learning involves something like probabilistic and statistical inference, but we just don't know yet.
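    To make the noisy-channel view in point D concrete, here is a minimal Python sketch (every word and probability below is invented for illustration): the hearer picks the meaning m that maximizes P(m) * P(heard | m).

    # Interpretation as Bayesian (noisy-channel) inference: choose the meaning
    # that maximizes P(meaning) * P(heard | meaning).
    # All vocabulary and probabilities here are invented for illustration.
    prior = {"big": 0.5, "pig": 0.2, "dig": 0.3}   # P(meaning)
    confusion = {                                  # P(heard | meaning)
        ("big", "big"): 0.8, ("big", "pig"): 0.4, ("big", "dig"): 0.1,
    }

    def interpret(heard):
        # argmax over meanings; unseen pairs get a small floor probability
        return max(prior, key=lambda m: prior[m] * confusion.get((heard, m), 0.05))

    print(interpret("big"))  # -> 'big': both prior and likelihood favour it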
    Now let me back up my answers with a more detailed look at the remaining questions.

    What is a statistical model?

    A statistical model is a mathematical model which is modified or trained by the input of data points. Statistical models are often but not always probabilistic. Where the distinction is important we will be careful not to just say "statistical" but to use the following component terms (a short sketch combining these terms follows the list):
    • A mathematical model specifies a relation among variables, either in functional form that maps inputs to outputs (e.g. y = m x + b) or in relation form (e.g. the following (x, y) pairs are part of the relation).
    • A probabilistic model specifies a probability distribution over possible values of random variables, e.g., P(x, y), rather than a strict deterministic relationship, e.g., y = f(x).
    • A trained model uses some training/learning algorithm to take as input a collection of possible models and a collection of data points (e.g. (x, y) pairs) and select the best model. Often this is in the form of choosing the values of parameters (such as m and b above) through a process of statistical inference.
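    As a concrete illustration (with invented data), the snippet below treats y = m x + b as the mathematical model and turns it into a trained model by estimating m and b from noisy observations:

    # Fitting y = m*x + b: a mathematical model becomes a *trained* model
    # once its parameters are chosen by statistical inference over data.
    # The data points are simulated for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # true m=2, b=1, plus noise

    m, b = np.polyfit(x, y, deg=1)  # least-squares estimates of the parameters
    print(f"estimated m={m:.2f}, b={b:.2f}")  # close to the true values
    # Adding an explicit noise model, e.g. y ~ Normal(m*x + b, sigma^2),
    # would make this a probabilistic model as well.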

    For example, a decade before Chomsky, Claude Shannon proposed probabilistic models of communication based on Markov chains of words. If you have a vocabulary of 100,000 words and a second-order Markov model in which the probability of a word depends on the previous two words, then you need a quadrillion (10¹⁵) probability values to specify the model. The only feasible way to learn these 10¹⁵ values is to gather statistics from data and introduce some smoothing method for the many cases where there is no data.
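    Here is a toy version of such a model in Python (the corpus is a tiny stand-in, and add-one smoothing stands in for the "smoothing method" above):

    # Toy second-order (trigram) Markov model of words with add-one smoothing.
    # The corpus is a tiny stand-in; real models train on billions of words.
    from collections import Counter

    corpus = "the dog ran . the dog sat . the cat sat .".split()
    V = len(set(corpus))  # vocabulary size

    trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
    bigrams = Counter(zip(corpus, corpus[1:]))

    def p(w3, w1, w2):
        # P(w3 | w1, w2) with Laplace (add-one) smoothing, so unseen
        # continuations still get a small non-zero probability.
        return (trigrams[(w1, w2, w3)] + 1) / (bigrams[(w1, w2)] + V)

    print(p("sat", "the", "dog"))  # seen continuation: relatively high
    print(p("ran", "the", "cat"))  # unseen continuation: small but non-zero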

    Therefore, most (but not all) probabilistic models are trained. Also, many (but not all) trained models are probabilistic.

    As another example, consider the Newtonian model of gravitational attraction, which says that the force between two objects of mass m1 and m2 a distance r apart is given by

    F = G m₁ m₂ / r²

    where G is the universal gravitational constant. This is a trained model because the gravitational constant G is determined by statistical inference over the results of a series of experiments that contain stochastic experimental error. It is also a deterministic (non-probabilistic) model because it states an exact functional relationship. I believe that Chomsky has no objection to this kind of statistical model. Rather, he seems to reserve his criticism for statistical models like Shannon's that have quadrillions of parameters, not just one or two.
    (This example brings up another distinction: the gravitational model is continuous and quantitative whereas the linguistic tradition has favored models that are discrete, categorical, and qualitative: a word is or is not a verb, there is no question of its degree of verbiness. For more on these distinctions, see Chris Manning's article on Probabilistic Syntax.)
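    In the same spirit, here is a sketch (with simulated measurements) of how a single constant like G gets "trained" by statistical inference over noisy experiments:

    # Estimating one physical constant from noisy measurements: the sense in
    # which Newton's law counts as a "trained" model. Data is simulated.
    import numpy as np

    G_TRUE = 6.674e-11
    rng = np.random.default_rng(1)

    m1 = rng.uniform(1.0, 10.0, 100)           # masses (kg)
    m2 = rng.uniform(1.0, 10.0, 100)
    r = rng.uniform(0.1, 1.0, 100)             # separations (m)
    F = G_TRUE * m1 * m2 / r**2
    F *= 1 + rng.normal(scale=0.05, size=100)  # stochastic experimental error

    x = m1 * m2 / r**2
    G_hat = (x @ F) / (x @ x)  # least squares through the origin: F = G*x
    print(f"estimated G = {G_hat:.3e}")  # close to 6.674e-11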

    A relevant probabilistic statistical model is the ideal gas law, which describes the pressure P of a gas in terms of the number of molecules N, the volume V, the temperature T, and Boltzmann's constant k:

    P = N k T / V.

    The equation can be derived from first principles using the tools of statistical mechanics. It is an uncertain, incorrect model; the true model would have to describe the motions of individual gas molecules. This model ignores that complexity and summarizes our uncertainty about the location of individual molecules. Thus, even though it is statistical and probabilistic, even though it does not completely model reality, it does provide both good predictions and insight—insight that is not available from trying to understand the true movements of individual molecules.

    Now let's consider the non-statistical model of spelling expressed by the rule "I before E except after C." Compare that to the probabilistic, trained statistical model:

    P(IE) = 0.0177    P(CIE) = 0.0014    P(*IE) = 0.0163
    P(EI) = 0.0046    P(CEI) = 0.0005    P(*EI) = 0.0041

    This model comes from statistics on a corpus of a trillion words of English text. The notation P(IE) is the probability that a word sampled from this corpus contains the consecutive letters "IE." P(CIE) is the probability that a word contains the consecutive letters "CIE", and P(*IE) is the probability of any letter other than C followed by IE. The statistical data confirms that IE is in fact more common than EI, and that the dominance of IE lessens when following a C, but contrary to the rule, CIE is still more common than CEI. Examples of "CIE" words include "science," "society," "ancient" and "species." The disadvantage of the "I before E except after C" model is that it is not very accurate.

    Consider:

    Accuracy("I before E") = 0.0177/(0.0177+0.0046) = 0.793
    Accuracy("I before E except after C") = (0.0005+0.0163)/(0.0005+0.0163+0.0014+0.0041) = 0.753

    A more complex statistical model (say, one that gave the probability of all 4-letter sequences, and/or of all known words) could be ten times more accurate at the task of spelling, but offers little insight into what is going on. (Insight would require a model that knows about phonemes, syllabification, and language of origin. Such a model could be trained (or not) and probabilistic (or not).)
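    The accuracy arithmetic above is easy to check directly (the numbers are copied from the corpus statistics quoted in the text):

    # Reproducing the spelling-rule accuracy computations from the corpus
    # probabilities quoted above.
    p_ie, p_ei = 0.0177, 0.0046     # totals
    p_cie, p_cei = 0.0014, 0.0005   # "ie"/"ei" after C
    p_xie, p_xei = 0.0163, 0.0041   # "ie"/"ei" after anything but C

    acc_rule1 = p_ie / (p_ie + p_ei)  # "I before E"
    acc_rule2 = (p_xie + p_cei) / (p_xie + p_cei + p_cie + p_xei)  # "...except after C"
    print(f"I before E:          {acc_rule1:.3f}")  # 0.793
    print(f"...except after C:   {acc_rule2:.3f}")  # 0.753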

    As a final example (not of statistical models, but of insight), consider the Theory of Supreme Court Justice Hand-Shaking: when the supreme court convenes, all attending justices shake hands with every other justice. The number of attendees, n, must be an integer in the range 0 to 9; what is the total number of handshakes, h for a given n? Here are three possible explanations:
    A. Each of n justices shakes hands with the other n - 1 justices, but that counts Alito/Breyer and Breyer/Alito as two separate shakes, so we should cut the total in half, and we end up with h = n × (n - 1) / 2.
    B. To avoid double-counting, we will order the justices by seniority and only count a more-senior/more-junior handshake, not a more-junior/more-senior one. So we count, for each justice, the shakes with the more junior justices, and sum them up, giving h = Σ_{i=1..n} (i - 1).
    C. Just look at this table:

    n: 0  1  2  3  4  5  6  7  8  9
    h: 0  0  1  3  6  10 15 21 28 36

    Some people might prefer A, some might prefer B, and if you are slow at doing multiplication or addition you might prefer C. Why? All three explanations describe exactly the same theory — the same function from n to h, over the entire domain of possible values of n. Thus we could prefer A (or B) over C only for reasons other than the theory itself. We might find that A or B gave us a better understanding of the problem. A and B are certainly more useful than C for figuring out what happens if Congress exercises its power to add an additional associate justice. Theory A might be most helpful in developing a theory of handshakes at the end of a hockey game (when each player shakes hands with players on the opposing team) or in proving that the number of people who shook an odd number of hands at the MIT Symposium is even.
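    All three explanations are easy to check against each other in a few lines:

    # The three handshake "theories" describe the same function from n to h.
    table = {0: 0, 1: 0, 2: 1, 3: 3, 4: 6, 5: 10, 6: 15, 7: 21, 8: 28, 9: 36}

    def theory_a(n):  # n*(n-1)/2, halving the double count
        return n * (n - 1) // 2

    def theory_b(n):  # sum over each justice of shakes with more junior justices
        return sum(i - 1 for i in range(1, n + 1))

    for n in range(10):
        assert theory_a(n) == theory_b(n) == table[n]
    print("all three explanations agree on n = 0..9")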

    How successful are statistical language models?

    Chomsky said words to the effect that statistical language models have had some limited success in some application areas. Let's look at computer systems that deal with language, and at the notion of "success" defined by "making accurate predictions about the world." First, the major application areas:
    • Search engines: 100% of major players are trained and probabilistic. Their operation cannot be described by a simple function.
    • Speech recognition: 100% of major systems are trained and probabilistic, mostly relying on probabilistic hidden Markov models.
    • Machine translation: 100% of top competitors in competitions such as NIST use statistical methods. Some commercial systems use a hybrid of trained and rule-based approaches. Of the 4000 language pairs covered by machine translation systems, a statistical system is by far the best for every pair except Japanese-English, where the top statistical system is roughly equal to the top hybrid system.
    • Question answering: this application is less well-developed, and many systems build heavily on the statistical and probabilistic approach used by search engines. The IBM Watson system that recently won on Jeopardy is thoroughly probabilistic and trained, while Boris Katz's START is a hybrid. All systems use at least some statistical techniques.
    Now let's look at some components that are of interest only to the computational linguist, not to the end user:
    • Word sense disambiguation: 100% of top competitors at the SemEval-2 competition used statistical techniques; most are probabilistic; some use a hybrid approach incorporating rules from sources such as Wordnet.
    • Coreference resolution: The majority of current systems are statistical, although we should mention the system of Haghighi and Klein, which can be described as a hybrid system that is mostly rule-based rather than trained, and performs on par with top statistical systems.
    • Part of speech tagging: Most current systems are statistical. The Brill tagger stands out as a successful hybrid system: it learns a set of deterministic rules from statistical data.
    • Parsing: There are many parsing systems, using multiple approaches. Almost all of the most successful are statistical, and the majority are probabilistic (with a substantial minority of deterministic parsers).
    Clearly, it is inaccurate to say that statistical models (and probabilistic models) have achieved limited success; rather they have achieved a dominant (although not exclusive) position.

    Another measure of success is the degree to which an idea captures a community of researchers. As Steve Abney wrote in 1996, "In the space of the last ten years, statistical methods have gone from being virtually unknown in computational linguistics to being a fundamental given. ... anyone who cannot at least use the terminology persuasively risks being mistaken for kitchen help at the ACL [Association for Computational Linguistics] banquet."

    Now of course, the majority doesn't rule -- just because everyone is jumping on some bandwagon, that doesn't make it right. But I made the switch: after about 14 years of trying to get language models to work using logical rules, I started to adopt probabilistic approaches (thanks to pioneers like Gene Charniak (and Judea Pearl for probability in general) and to my colleagues who were early adopters, like Dekai Wu). And I saw everyone around me making the same switch. (And I didn't see anyone going in the other direction.) We all saw the limitations of the old tools, and the benefits of the new.

    And while it may seem crass and anti-intellectual to consider a financial measure of success, it is worth noting that the intellectual offspring of Shannon's theory create several trillion dollars of revenue each year, while the offspring of Chomsky's theories generate well under a billion.
    This section has shown that one reason why the vast majority of researchers in computational linguistics use statistical models is an engineering reason: statistical models have state-of-the-art performance, and in most cases non-statistical models perform worse. For the remainder of this essay we will concentrate on scientific reasons: that probabilistic models better represent linguistic facts, and statistical techniques make it easier for us to make sense of those facts.

    Is there anything like [the statistical model] notion of success in the history of science?

    When Chomsky said "That's a notion of [scientific] success that's very novel. I don't know of anything like it in the history of science" he apparently meant that the notion of success of "accurately modeling the world" is novel, and that the only true measure of success in the history of science is "providing insight" — of answering why things are the way they are, not just describing how they are.

    A dictionary definition of science is "the systematic study of the structure and behavior of the physical and natural world through observation and experiment," which stresses accurate modeling over insight, but it seems to me that both notions have always coexisted as part of doing science. To test that, I consulted the epitome of doing science, namely Science. I looked at the current issue and chose a title and abstract at random:

    Chlorinated Indium Tin Oxide Electrodes with High Work Function for Organic Device Compatibility

    In organic light-emitting diodes (OLEDs), a stack of multiple organic layers facilitates charge flow from the low work function [~4.7 electron volts (eV)] of the transparent electrode (tin-doped indium oxide, ITO) to the deep energy levels (~6 eV) of the active light-emitting organic materials. We demonstrate a chlorinated ITO transparent electrode with a work function of >6.1 eV that provides a direct match to the energy levels of the active light-emitting materials in state-of-the-art OLEDs. A highly simplified green OLED with a maximum external quantum efficiency (EQE) of 54% and power efficiency of 230 lumens per watt using outcoupling enhancement was demonstrated, as were EQE of 50% and power efficiency of 110 lumens per watt at 10,000 candelas per square meter.

    It certainly seems that this article is much more focused on "accurately modeling the world" than on "providing insight." The paper does indeed fit in to a body of theories, but it is mostly reporting on specific experiments and the results obtained from them (e.g. efficiency of 54%).

    I then looked at all the titles and abstracts from the current issue of Science:
    • Comparative Functional Genomics of the Fission Yeasts
    • Dimensionality Control of Electronic Phase Transitions in Nickel-Oxide Superlattices
    • Competition of Superconducting Phenomena and Kondo Screening at the Nanoscale
    • Chlorinated Indium Tin Oxide Electrodes with High Work Function for Organic Device Compatibility
    • Probing Asthenospheric Density, Temperature, and Elastic Moduli Below the Western United States
    • Impact of Polar Ozone Depletion on Subtropical Precipitation
    • Fossil Evidence on Origin of the Mammalian Brain
    • Industrial Melanism in British Peppered Moths Has a Singular and Recent Mutational Origin
    • The Selaginella Genome Identifies Genetic Changes Associated with the Evolution of Vascular Plants
    • Chromatin "Prepattern" and Histone Modifiers in a Fate Choice for Liver and Pancreas
    • Spatial Coupling of mTOR and Autophagy Augments Secretory Phenotypes
    • Diet Drives Convergence in Gut Microbiome Functions Across Mammalian Phylogeny and Within Humans
    • The Toll-Like Receptor 2 Pathway Establishes Colonization by a Commensal of the Human Microbiota
    • A Packing Mechanism for Nucleosome Organization Reconstituted Across a Eukaryotic Genome
    • Structures of the Bacterial Ribosome in Classical and Hybrid States of tRNA Binding
    and did the same for the current issue of Cell:
    • Mapping the NPHP-JBTS-MKS Protein Network Reveals Ciliopathy Disease Genes and Pathways
    • Double-Strand Break Repair-Independent Role for BRCA2 in Blocking Stalled Replication Fork Degradation by MRE11
    • Establishment and Maintenance of Alternative Chromatin States at a Multicopy Gene Locus
    • An Epigenetic Signature for Monoallelic Olfactory Receptor Expression
    • Distinct p53 Transcriptional Programs Dictate Acute DNA-Damage Responses and Tumor Suppression
    • An ADIOL-ERβ-CtBP Transrepression Pathway Negatively Regulates Microglia-Mediated Inflammation
    • A Hormone-Dependent Module Regulating Energy Balance
    • Class IIa Histone Deacetylases Are Hormone-Activated Regulators of FOXO and Mammalian Glucose Homeostasis
    and for the 2010 Nobel Prizes in science:
    • Physics: for groundbreaking experiments regarding the two-dimensional material graphene
    • Chemistry: for palladium-catalyzed cross couplings in organic synthesis
    • Physiology or Medicine: for the development of in vitro fertilization

    My conclusion is that 100% of these articles and awards are more about "accurately modeling the world" than they are about "providing insight," although they all have some theoretical insight component as well. I recognize that judging one way or the other is a difficult ill-defined task, and that you shouldn't accept my judgements, because I have an inherent bias. (I was considering running an experiment on Mechanical Turk to get an unbiased answer, but those familiar with Mechanical Turk told me these questions are probably too hard. So you the reader can do your own experiment and see if you agree.)

    What doesn't Chomsky like about statistical models?

    I said that statistical models are sometimes confused with probabilistic models; let's first consider the extent to which Chomsky's objections are actually about probabilistic models. In 1969 he famously wrote:

    But it must be recognized that the notion of "probability of a sentence" is an entirely useless one, under any known interpretation of this term.
    His main argument being that, under any interpretation known to him, the probability of a novel sentence must be zero, and since novel sentences are in fact generated all the time, there is a contradiction. The resolution of this contradiction is of course that it is not necessary to assign a probability of zero to a novel sentence; in fact, with current probabilistic models it is well-known how to assign a non-zero probability to novel occurrences, so this criticism is invalid, but was very influential for decades. Previously, in Syntactic Structures (1957) Chomsky wrote:

    I think we are forced to conclude that ... probabilistic models give no particular insight into some of the basic problems of syntactic structure.

    In the footnote to this conclusion he considers the possibility of a useful probabilistic/statistical model, saying "I would certainly not care to argue that ... is unthinkable, but I know of no suggestion to this effect that does not have obvious flaws." The main "obvious flaw" is this: Consider:
    1. I never, ever, ever, ever, ... fiddle around in any way with electrical equipment.
    2. She never, ever, ever, ever, ... fiddles around in any way with electrical equipment.
    3. * I never, ever, ever, ever, ... fiddles around in any way with electrical equipment.
    4. * She never, ever, ever, ever, ... fiddle around in any way with electrical equipment.

    No matter how many repetitions of "ever" you insert, sentences 1 and 2 are grammatical and 3 and 4 are ungrammatical. A probabilistic Markov-chain model with n states can never make the necessary distinction (between 1 or 2 versus 3 or 4) when there are more than n copies of "ever." Therefore, a probabilistic Markov-chain model cannot handle all of English.

    This criticism is correct, but it is a criticism of Markov-chain models—it has nothing to do with probabilistic models (or trained models) at all. Moreover, since 1957 we have seen many types of probabilistic language models beyond the Markov-chain word models. Examples 1-4 above can in fact be distinguished with a finite-state model that is not a chain, but other examples require more sophisticated models. The best studied is probabilistic context-free grammar (PCFG), which operates over trees, categories of words, and individual lexical items, and has none of the restrictions of finite-state models. We find that PCFGs are state-of-the-art for parsing performance and are easier to learn from data than categorical context-free grammars. Other types of probabilistic models cover semantic and discourse structures. Every probabilistic model is a superset of a deterministic model (because the deterministic model could be seen as a probabilistic model where the probabilities are restricted to be 0 or 1), so any valid criticism of probabilistic models would have to be because they are too expressive, not because they are not expressive enough.
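    For a feel of what a PCFG looks like, here is a toy sketch using NLTK (the grammar and its probabilities are invented for illustration; real grammars are induced from treebanks):

    # A toy probabilistic context-free grammar (PCFG): probabilities attach to
    # tree-building rules, not to word-to-word transitions. Grammar invented.
    import nltk

    grammar = nltk.PCFG.fromstring("""
        S   -> NP VP        [1.0]
        NP  -> 'ideas'      [0.5] | Adj NP [0.5]
        Adj -> 'colorless'  [0.5] | 'green' [0.5]
        VP  -> V Adv        [1.0]
        V   -> 'sleep'      [1.0]
        Adv -> 'furiously'  [1.0]
    """)

    parser = nltk.ViterbiParser(grammar)
    for tree in parser.parse("colorless green ideas sleep furiously".split()):
        print(tree.prob())   # probability of the most likely parse
        tree.pretty_print()  # the hierarchical structure a word chain lacks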

    In Syntactic Structures, Chomsky introduces a now-famous example that is another criticism of finite-state probabilistic models:

    Neither (a) 'colorless green ideas sleep furiously' nor (b) 'furiously sleep ideas green colorless', nor any of their parts, has ever occurred in the past linguistic experience of an English speaker. But (a) is grammatical, while (b) is not.
    Chomsky appears to be correct that neither sentence appeared in the published literature before 1955. I'm not sure what he meant by "any of their parts," but certainly every two-word part had occurred, for example:
    • "It is neutral green, colorless green, like the glaucous water lying in a cellar." The Paris we remember, Elisabeth Finley Thomas (1942).
    • "To specify those green ideas is hardly necessary, but you may observe Mr. [D. H.] Lawrence in the role of the satiated aesthete." The New Republic: Volume 29 p. 184, William White (1922).
    • "Ideas sleep in books." Current Opinion: Volume 52, (1912).

    But regardless of what is meant by "part," a statistically-trained finite-state model can in fact distinguish between these two sentences. Pereira (2001) showed that such a model, augmented with word categories and trained by expectation maximization on newspaper text, computes that (a) is 200,000 times more probable than (b). To prove that this was not the result of Chomsky's sentence itself sneaking into newspaper text, I repeated the experiment, using a much cruder model with Laplacian smoothing and no categories, trained over the Google Book corpus from 1800 to 1954, and found that (a) is about 10,000 times more probable. If we had a probabilistic model over trees as well as word sequences, we could perhaps do an even better job of computing degree of grammaticality.
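    A minimal sketch of that comparison (a toy corpus stands in for the Google Book corpus; with Laplace smoothing neither sentence gets probability zero, and the grammatical order wins whenever its word pairs are better attested):

    # Word-bigram comparison with Laplace smoothing. `corpus` is a tiny
    # stand-in for the Google Book corpus used in the experiment above.
    from collections import Counter
    from math import prod

    corpus = ("green ideas are good ideas . ideas sleep . "
              "the green light . sleep furiously is rare .").split()
    V = len(set(corpus))
    uni, bi = Counter(corpus), Counter(zip(corpus, corpus[1:]))

    def p_sentence(words):
        # Product of smoothed bigram probabilities P(w2 | w1); never exactly zero.
        return prod((bi[(w1, w2)] + 1) / (uni[w1] + V)
                    for w1, w2 in zip(words, words[1:]))

    a = "colorless green ideas sleep furiously".split()
    b = "furiously sleep ideas green colorless".split()
    print(p_sentence(a) / p_sentence(b))  # > 1: (a) is the more probable order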

    Furthermore, the statistical models are capable of delivering the judgment that both sentences are extremely improbable, when compared to, say, "Effective green products sell well." Chomsky's theory, being categorical, cannot make this distinction; all it can distinguish is grammatical/ungrammatical.

    Another part of Chomsky's objection is "we cannot seriously propose that a child learns the values of 10⁹ parameters in a childhood lasting only 10⁸ seconds." (Note that modern models are much larger than the 10⁹ parameters that were contemplated in the 1960s.) But of course nobody is proposing that these parameters are learned one-by-one; the right way to do learning is to set large swaths of near-zero parameters simultaneously with a smoothing or regularization procedure, and update the high-probability parameters continuously as observations come in. And no one is suggesting that Markov models by themselves are a serious model of human language performance. But I (and others) suggest that probabilistic, trained models are a better model of human language performance than are categorical, untrained models. And yes, it seems clear that an adult speaker of English does know billions of language facts (for example, that one says "big game" rather than "large game" when talking about an important football game). These facts must somehow be encoded in the brain.

    It seems clear that probabilistic models are better for judging the likelihood of a sentence, or its degree of sensibility. But even if you are not interested in these factors and are only interested in the grammaticality of sentences, it still seems that probabilistic models do a better job at describing the linguistic facts. The mathematical theory of formal languages defines a language as a set of sentences. That is, every sentence is either grammatical or ungrammatical; there is no need for probability in this framework. But natural languages are not like that. A scientific theory of natural languages must account for the many phrases and sentences which leave a native speaker uncertain about their grammaticality (see Chris Manning's article and its discussion of the phrase "as least as"), and there are phrases which some speakers find perfectly grammatical, others perfectly ungrammatical, and still others will flip-flop from one occasion to the next. Finally, there are usages which are rare in a language, but cannot be dismissed if one is concerned with actual data. For example, the verb quake is listed as intransitive in dictionaries, meaning that (1) below is grammatical, and (2) is not, according to a categorical theory of grammar.
    1. The earth quaked.
    2. ? It quaked her bowels.

    But (2) actually appears as a sentence of English. This poses a dilemma for the categorical theory. When (2) is observed we must either arbitrarily dismiss it as an error that is outside the bounds of our model (without any theoretical grounds for doing so), or we must change the theory to allow (2), which often results in the acceptance of a flood of sentences that we would prefer to remain ungrammatical. As Edward Sapir said in 1921, "All grammars leak." But in a probabilistic model there is no difficulty; we can say that quake has a high probability of being used intransitively, and a low probability of transitive use (and we can, if we care, further describe those uses through subcategorization).

    Steve Abney points out that probabilistic models are better suited for modeling language change. He cites the example of a 15th century Englishman who goes to the pub every day and orders "Ale!" Under a categorical model, you could reasonably expect that one day he would be served eel, because the great vowel shift flipped a Boolean parameter in his mind a day before it flipped the parameter in the publican's. In a probabilistic framework, there will be multiple parameters, perhaps with continuous values, and it is easy to see how the shift can take place gradually over two centuries.

    Thus it seems that grammaticality is not a categorical, deterministic judgment but rather an inherently probabilistic one. This becomes clear to anyone who spends time making observations of a corpus of actual sentences, but can remain unknown to those who think that the object of study is their own set of intuitions about grammaticality. Both observation and intuition have been used in the history of science, so neither is "novel," but it is observation, not intuition that is the dominant model for science.

    Now let's consider what I think is Chomsky's main point of disagreement with statistical models: the tension between "accurate description" and "insight."

    This is an old distinction. Charles Darwin (biologist, 1809–1882) is best known for his insightful theories but he stressed the importance of accurate description, saying "False facts are highly injurious to the progress of science, for they often endure long; but false views, if supported by some evidence, do little harm, for every one takes a salutary pleasure in proving their falseness."

    More recently, Richard Feynman (physicist, 1918–1988) wrote "Physics can progress without the proofs, but we can't go on without the facts."

    On the other side, Ernest Rutherford (physicist, 1871–1937) disdained mere description, saying "All science is either physics or stamp collecting." Chomsky stands with him: "You can also collect butterflies and make many observations. If you like butterflies, that's fine; but such work must not be confounded with research, which is concerned to discover explanatory principles."

    Acknowledging both sides is Robert Millikan (physicist, 1868–1953) who said in his Nobel acceptance speech "Science walks forward on two feet, namely theory and experiment ... Sometimes it is one foot that is put forward first, sometimes the other, but continuous progress is only made by the use of both."

    The two cultures

    After all those distinguished scientists have weighed in, I think the most relevant contribution to the current discussion is the 2001 paper by Leo Breiman (statistician, 1928–2005), Statistical Modeling: The Two Cultures. In this paper Breiman, alluding to C.P. Snow, describes two cultures:

    First, the data modeling culture (to which, Breiman estimates, 98% of statisticians subscribe) holds that nature can be described as a black box that has a relatively simple underlying model which maps from input variables to output variables (with perhaps some random noise thrown in). It is the job of the statistician to wisely choose an underlying model that reflects the reality of nature, and then use statistical data to estimate the parameters of the model.

    Second, the algorithmic modeling culture (subscribed to by 2% of statisticians and many researchers in biology, artificial intelligence, and other fields that deal with complex phenomena) holds that nature's black box cannot necessarily be described by a simple model. Complex algorithmic approaches (such as support vector machines or boosted decision trees or deep belief networks) are used to estimate the function that maps from input to output variables, but we have no expectation that the form of the function that emerges from this complex algorithm reflects the true underlying nature.
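    The contrast is easy to see in code (simulated data; scikit-learn assumed installed): a simple parametric data model next to an opaque algorithmic model, both judged only by how well they predict held-out data:

    # Two-cultures sketch: a simple data model (linear regression) vs. an
    # algorithmic model (gradient-boosted trees) on the same simulated data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=500)  # nature's "black box"

    X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

    for model in (LinearRegression(), GradientBoostingRegressor()):
        score = model.fit(X_tr, y_tr).score(X_te, y_te)  # R^2 on held-out data
        print(type(model).__name__, round(score, 2))
    # The boosted trees fit nature better, but their fitted form offers no
    # simple, human-readable story about *why* y behaves as it does.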

    It seems that the algorithmic modeling culture is what Chomsky is objecting to most vigorously. It is not just that the models are statistical (or probabilistic), it is that they produce a form that, while accurately modeling reality, is not easily interpretable by humans, and makes no claim to correspond to the generative process used by nature. In other words, algorithmic modeling describes what does happen, but it doesn't answer the question of why.

    Breiman's article explains his objections to the first culture, data modeling.

    Basically, the conclusions made by data modeling are about the model, not about nature. (Aside: I remember in 2000 hearing James Martin, the leader of the Viking missions to Mars, saying that his job as a spacecraft engineer was not to land on Mars, but to land on the model of Mars provided by the geologists.) The problem is, if the model does not emulate nature well, then the conclusions may be wrong. For example, linear regression is one of the most powerful tools in the statistician's toolbox. Therefore, many analyses start out with "Assume the data are generated by a linear model..." and lack sufficient analysis of what happens if the data are not in fact generated that way. In addition, for complex problems there are usually many alternative good models, each with very similar measures of goodness of fit. How is the data modeler to choose between them? Something has to give. Breiman is inviting us to give up on the idea that we can uniquely model the true underlying form of nature's function from inputs to outputs. Instead he asks us to be satisfied with a function that accounts for the observed data well, and generalizes to new, previously unseen data well, but may be expressed in a complex mathematical form that may bear no relation to the "true" function's form (if such a true function even exists). Chomsky takes the opposite approach: he prefers to keep a simple, elegant model, and give up on the idea that the model will represent the data well. Instead, he declares that what he calls performance data—what people actually do—is off limits to linguistics; what really matters is competence—what he imagines that they should do.
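    A small, concrete illustration of the contrast (not Breiman's own example; the data and model choices below are invented for the purpose): fit a tidy, interpretable "data model" and an opaque "algorithmic model" to data that nature did not generate linearly, and see which set of conclusions you would trust.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)
        x = rng.uniform(-3, 3, size=(500, 1))
        y = np.sin(3 * x).ravel() + rng.normal(scale=0.1, size=500)  # nature's black box

        linear = LinearRegression().fit(x, y)            # data modeling culture
        boosted = GradientBoostingRegressor().fit(x, y)  # algorithmic modeling culture

        # The linear model yields a tidy story (a slope and an intercept)
        # that badly misdescribes nature; the boosted ensemble tracks the
        # data well but offers no comparably simple "why."
        print("linear R^2: ", linear.score(x, y))    # near zero
        print("boosted R^2:", boosted.score(x, y))   # close to 1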

    In January of 2011, television personality Bill O'Reilly weighed in on more than one culture war with his statement "tide goes in, tide goes out. Never a miscommunication. You can't explain that," which he proposed as an argument for the existence of God. O'Reilly was ridiculed by his detractors for not knowing that tides can be readily explained by a system of partial differential equations describing the gravitational interaction of sun, earth, and moon (a fact that was first worked out by Laplace in 1776 and has been considerably refined since; when asked by Napoleon why the creator did not enter into his calculations, Laplace said "I had no need of that hypothesis."). (O'Reilly also seems not to know about Deimos and Phobos (two of my favorite moons in the entire solar system, along with Europa, Io, and Titan), nor that Mars and Venus orbit the sun, nor that the reason Venus has no moons is because it is so close to the sun that there is scant room for a stable lunar orbit.) But O'Reilly realizes that it doesn't matter what his detractors think of his astronomical ignorance, because his supporters think he has gotten exactly to the key issue: why? He doesn't care how the tides work, tell him why they work. Why is the moon at the right distance to provide a gentle tide, and exert a stabilizing effect on earth's axis of rotation, thus protecting life here? Why does gravity work the way it does? Why does anything at all exist rather than not exist? O'Reilly is correct that these questions can only be addressed by mythmaking, religion or philosophy, not by science.

    Chomsky has a philosophy based on the idea that we should focus on the deep whys and that mere explanations of reality don't matter. In this, Chomsky is in complete agreement with O'Reilly. (I recognize that the previous sentence would have an extremely low probability in a probabilistic model trained on a newspaper or TV corpus.) Chomsky believes a theory of language should be simple and understandable, like a linear regression model where we know the underlying process is a straight line, and all we have to do is estimate the slope and intercept.

    For example, consider the notion of a pro-drop language from Chomsky's Lectures on Government and Binding (1981). In English we say, for example, "I'm hungry," expressing the pronoun "I". But in Spanish, one expresses the same thought with "Tengo hambre" (literally "have hunger"), dropping the pronoun "Yo". Chomsky's theory is that there is a "pro-drop parameter" which is "true" in Spanish and "false" in English, and that once we discover the small set of parameters that describe all languages, and the values of those parameters for each language, we will have achieved true understanding.

    The problem is that reality is messier than this theory. Here are some dropped pronouns in English:
    • "Not gonna do it. Wouldn't be prudent." (Dana Carvey, impersonating George H. W. Bush)
    • "Thinks he can outsmart us, does he?" (Evelyn Waugh, The Loved One)
    • "Likes to fight, does he?" (S.M. Stirling, The Sunrise Lands)
    • "Thinks he's all that." (Kate Brian, Lucky T)
    • "Go for a walk?" (countless dog owners)
    • "Gotcha!" "Found it!" "Looks good to me!" (common expressions)

    Linguists can argue over the interpretation of these facts for hours on end, but the diversity of language seems to be much more complex than a single Boolean value for a pro-drop parameter. We shouldn't accept a theoretical framework that places a priority on making the model simple over making it accurately reflect reality.
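    One way to make the point concrete (the counts below are invented placeholders, not real corpus figures) is to replace the Boolean parameter with an observed rate:

        # A categorical "pro-drop parameter" versus graded corpus evidence.
        pro_drop_parameter = {"Spanish": True, "English": False}

        observed = {  # hypothetical counts: (dropped subjects, total clauses)
            "Spanish": (870, 1000),
            "English": (40, 1000),   # "Gotcha!", "Looks good to me!", ...
        }

        for lang, (dropped, total) in observed.items():
            print(f"{lang}: parameter={pro_drop_parameter[lang]}, "
                  f"observed drop rate={dropped / total:.0%}")

        # English is not 0% and Spanish is not 100%: a graded quantity,
        # not a single Boolean, is doing the real descriptive work.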

    From the beginning, Chomsky has focused on the generative side of language.

    From this side, it is reasonable to tell a non-probabilistic story: I know definitively the idea I want to express—I'm starting from a single semantic form—thus all I have to do is choose the words to say it; why can't that be a deterministic, categorical process? If Chomsky had focused on the other side, interpretation, as Claude Shannon did, he may have changed his tune. In interpretation (such as speech recognition) the listener receives a noisy, ambiguous signal and needs to decide which of many possible intended messages is most likely. Thus, it is obvious that this is inherently a probabilistic problem, as was recognized early on by all researchers in speech recognition, and by scientists in other fields that do interpretation: the astronomer Laplace said in 1819 "Probability theory is nothing more than common sense reduced to calculation," and the physicist James Maxwell said in 1850 "The true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man's mind."
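    The interpretation side can be sketched in a few lines as a noisy-channel decision: choose the intended message w that maximizes P(w) · P(signal | w). The prior and error-model numbers below are made up purely for illustration:

        def correct(signal, prior, channel):
            """Return the most probable intended word given a noisy signal."""
            return max(prior, key=lambda w: prior[w] * channel.get((signal, w), 0.0))

        prior = {"their": 0.6, "there": 0.3, "they're": 0.1}  # language model P(w)
        channel = {                                           # error model P(s | w)
            ("thier", "their"): 0.8,    # a common transposition
            ("thier", "there"): 0.1,
            ("thier", "they're"): 0.01,
        }

        print(correct("thier", prior, channel))   # -> "their"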

    Finally, one more reason why Chomsky dislikes statistical models is that they tend to make linguistics an empirical science (a science about how people actually use language) rather than a mathematical science (an investigation of the mathematical properties of models of formal language). Chomsky prefers the latter, as evidenced by his statement in Aspects of the Theory of Syntax (1965):

    Linguistic theory is mentalistic, since it is concerned with discovering a mental reality underlying actual behavior. Observed use of language ... may provide evidence ... but surely cannot constitute the subject-matter of linguistics, if this is to be a serious discipline.

    I can't imagine Laplace saying that observations of the planets cannot constitute the subject-matter of orbital mechanics, or Maxwell saying that observations of electrical charge cannot constitute the subject-matter of electromagnetism. It is true that physics considers idealizations that are abstractions from the messy real world. For example, a class of mechanics problems ignores friction. But that doesn't mean that friction is not considered part of the subject-matter of physics.

    So how could Chomsky say that observations of language cannot be the subject-matter of linguistics? It seems to come from his viewpoint as a Platonist and a Rationalist and perhaps a bit of a Mystic. As in Plato's allegory of the cave, Chomsky thinks we should focus on the ideal, abstract forms that underlie language, not on the superficial manifestations of language that happen to be perceivable in the real world. That is why he is not interested in language performance. But Chomsky, like Plato, has to answer where these ideal forms come from. Chomsky (1991) shows that he is happy with a Mystical answer, although he shifts vocabulary from "soul" to "biological endowment."

    Plato's answer was that the knowledge is 'remembered' from an earlier existence. The answer calls for a mechanism: perhaps the immortal soul ... rephrasing Plato's answer in terms more congenial to us today, we will say that the basic properties of cognitive systems are innate to the mind, part of human biological endowment.

    It was reasonable for Plato to think that the ideal of, say, a horse, was more important than any individual horse we can perceive in the world. In 400BC, species were thought to be eternal and unchanging. We now know that is not true; that the horses on another cave wall—in Lascaux—are now extinct, and that current horses continue to evolve slowly over time. Thus there is no such thing as a single ideal eternal "horse" form.

    We also now know that language is like that as well: languages are complex, random, contingent biological processes that are subject to the whims of evolution and cultural change. What constitutes a language is not an eternal ideal form, represented by the settings of a small number of parameters, but rather is the contingent outcome of complex processes. Since they are contingent, it seems they can only be analyzed with probabilistic models. Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning.

    Chomsky for some reason wants to avoid this, and therefore he must declare the actual facts of language use out of bounds and declare that true linguistics only exists in the mathematical realm, where he can impose the formalism he wants. Then, to get language from this abstract, eternal, mathematical realm into the heads of people, he must fabricate a mystical facility that is exactly tuned to the eternal realm. This may be very interesting from a mathematical point of view, but it misses the point about what language is, and how it works.

    Thanks

    Thanks to Ann Farmer, Fernando Pereira, Dan Jurafsky, Hal Varian, and others for comments and suggestions on this essay.

    Annotated Bibliography
    1 Abney, Steve (1996) Statistical Methods and Linguistics, in Klavans and Resnik (eds.) The Balancing Act: Combining Symbolic and Statistical Approaches to Language, MIT Press. An excellent overall introduction to the statistical approach to language processing, and covers some ground that is not addressed often, such as language change and individual differences.
    2 Breiman, Leo (2001) Statistical Modeling: The Two Cultures, Statistical Science, Vol. 16, No. 3, 199-231. Breiman does a great job of describing the two approaches, explaining the benefits of his approach, and defending his points in the very interesting commentary with eminent statisticians: Cox, Efron, Hoadley, and Parzen.
    3 Chomsky, Noam (1956) Three Models for the Description of Language, IRE Transactions on Information theory (2), pp. 113-124. Compares finite state, phrase structure, and transformational grammars. Introduces "colorless green ideas sleep furiously."
    4 Chomsky, Noam (1957) Syntactic Structures, Mouton. A book-length exposition of Chomsky's theory that was the leading exposition of linguistics for a decade. Claims that probabilistic models give no insight into syntax.
    5 Chomsky, Noam (1969) Some Empirical Assumptions in Modern Philosophy of Language, in Philosophy, Science and Method: Essays in Honor of Ernest Nagel, St. Martin's Press. Claims that the notion "probability of a sentence" is an entirely useless notion.
    6 Chomsky, Noam (1981) Lectures on Government and Binding, de Gruyter. A revision of Chomsky's theory; this version introduces Universal Grammar. We cite it for the coverage of parameters such as pro-drop.
    7 Chomsky, Noam (1991) Linguistics and adjacent fields: a personal view, in Kasher (ed.), A Chomskyan Turn, Oxford. I found the Plato quotes in this article, published by the Communist Party of Great Britain, and apparently written by someone with no linguistics training whatsoever, but with a political agenda.
    8 Gold, E. M. (1967) Language Identification in the Limit, Information and Control, Vol. 10, No. 5, pp. 447-474. Gold proved a result in formal language theory that we can state (with some artistic license) as this: imagine a game between two players, guesser and chooser. Chooser says to guesser, "Here is an infinite number of languages. I'm going to choose one of them, and start reading sentences to you that come from that language. On your N-th birthday there will be a True-False quiz where I give you 100 sentences you haven't heard yet, and you have to say whether they come from the language or not." There are some limits on what the infinite set looks like and on how the chooser can pick sentences (he can be deliberately tricky, but he can't just repeat the same sentence over and over, for example). Gold's result is that if the infinite set of languages are all generated by context-free grammars then there is no strategy for guesser that guarantees she gets 100% correct every time, no matter what N you choose for the birthday. This result was taken by Chomsky and others to mean that it is impossible for children to learn human languages without having an innate "language organ." As Johnson (2004) and others show, this was an invalid conclusion; the task of getting 100% on the quiz (which Gold called language identification) really has nothing in common with the task of language acquisition performed by children, so Gold's Theorem has no relevance.
    9 Horning, J. J. (1969) A study of grammatical inference, Ph.D. thesis, Stanford Univ. Where Gold found a negative result—that context-free languages were not identifiable from examples, Horning found a positive result—that probabilistic context-free languages are identifiable (to within an arbitrarily small level of error). Nobody doubts that humans have unique innate capabilities for understanding language (although it is unknown to what extent these capabilities are specific to language and to what extent they are general cognitive abilities related to sequencing and forming abstractions). But Horning proved in 1969 that Gold cannot be used as a convincing argument for an innate language organ that specifies all of language except for the setting of a few parameters.
    10 Johnson, Kent (2004) Gold's Theorem and cognitive science, Philosophy of Science, Vol. 71, pp. 571-592. The best article I've seen on what Gold's Theorem actually says and what has been claimed about it (correctly and incorrectly). Concludes that Gold has something to say about formal languages, but nothing about child language acquisition.
    11 Lappin, Shalom and Shieber, Stuart M. (2007) Machine learning theory and practice as a source of insight into universal grammar., Journal of Linguistics, Vol. 43, No. 2, pp. 393-427. An excellent article discussing the poverty of the stimulus, the fact that all models have bias, the difference between supervised and unsupervised learning, and modern (PAC or VC) learning theory. It provides alternatives to the model of Universal Grammar consisting of a fixed set of binary parameters.
    12 Manning, Christopher (2002) Probabilistic Syntax, in Bod, Hay, and Jannedy (eds.), Probabilistic Linguistics, MIT Press. A compelling introduction to probabilistic syntax, and how it is a better model for linguistic facts than categorical syntax. Covers "the joys and perils of corpus linguistics."
    13 Norvig, Peter (2007) How to Write a Spelling Corrector, unpublished web page. Shows working code to implement a probabilistic, statistical spelling correction algorithm.
    14 Norvig, Peter (2009) Natural Language Corpus Data, in Segaran and Hammerbacher (eds.), Beautiful Data, O'Reilly. Expands on the essay above; shows how to implement three tasks: text segmentation, cryptographic decoding, and spelling correction (in a slightly more complete form than the previous essay).
    15 Pereira, Fernando (2002) Formal grammar and information theory: together again?, in Nevin and Johnson (eds.), The Legacy of Zellig Harris, Benjamins. When I set out to write the page you are reading now, I was concentrating on the events that took place in Cambridge, Mass., 4800 km from home. After doing some research I was surprised to learn that the authors of two of the three best articles on this subject sit within a total of 10 meters from my desk: Fernando Pereira and Chris Manning. (The third, Steve Abney, sits 3700 km away.) But perhaps I shouldn't have been surprised. I remember giving a talk at ACL on the corpus-based language models used at Google, and having Fernando, then a professor at U. Penn., comment "I feel like I'm a particle physicist and you've got the only super-collider." A few years later he moved to Google. Fernando is also famous for his quote "The older I get, the further down the Chomsky Hierarchy I go." His article here covers some of the same ground as mine, but he goes farther in explaining the range of probabilistic models available and how they are useful.
    16 Plato (c. 380BC) The Republic. Cited here for the allegory of the cave.
    17 Shannon, C.E. (1948) A Mathematical Theory of Communication, The Bell System Technical Journal, Vol. 27, pp. 379-423. An enormously influential article that started the field of information theory and introduced the term "bit" and the noisy channel model, demonstrated successive n-gram approximations of English, described Markov models of language, defined entropy with respect to these models, and enabled the growth of the telecommunications industry.


    Peter Norvig

  • Boulder Dash 11th Mar 2019

    Artificial Intelligence--A Personal View
    D. Marr

    The Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, MA 02139, U.S.A.


    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.111.5076&rep=rep1&type=pdf

    “Finally, I would like to draw one more distinction that seems to be important when choosing a research problem, or when judging the value of completed work. The problem is that studies--particularly of natural language understanding, problem-solving, or the structure of memory--can easily degenerate into the writing of programs that do no more than mimic in an unenlightening way some small aspect of human performance. Weizenbaum [30] now judges his program Eliza to belong to this category, and I have never seen any reason to disagree. More controversially, I would also criticize on the same grounds Newell and Simon's work on production systems, and some of Norman and Rumelhart's [16] work on long term memory.

    The reason is this. If one believes that the aim of information-processing studies is to formulate and understand particular information-processing problems, then it is the structure of those problems that is central, not the mechanisms through which they are implemented. Therefore, the first thing to do is to find problems that we can solve well, find out how to solve them, and examine our performance in the light of that understanding. The most fruitful source of such problems is operations that we perform well, fluently (and hence unconsciously) since it is difficult to see how reliability could be achieved if there were no sound underlying method. On the other hand, problem-solving research has tended to concentrate on problems that we understand well intellectually but perform poorly on, like mental arithmetic and cryptarithmetic or on problems like geometry theorem-proving, or games like chess, in which human skills seem to rest on a huge base of knowledge and expertise. I argue that these are exceptionally good grounds for not studying how we carry out such tasks yet. I have no doubt that when we do mental arithmetic we are doing something well, but it is not arithmetic, and we seem far from understanding even one component of what that something is. Let us therefore concentrate on the simpler problems first, for there we have some hope of genuine advancement.

    If one ignores this stricture, one is left in the end with unlikely looking mechanisms whose only recommendation is that they cannot do something we cannot do. Production systems seem to me to fit this description quite well. Even taken on their own terms as mechanisms, they leave a lot to be desired.

    As a programming language they are poorly designed, and hard to use, and I cannot believe that the human brain could possibly be burdened with such poor implementation decisions at so basic a level.

    A parallel may perhaps be drawn between production systems for students of problem-solving, and Fourier analysis for visual neurophysiologists. Simple operations on a spatial frequency representation of an image can mimic several interesting visual phenomena that seem to be exhibited by our visual systems. These include the detection of repetition, certain visual illusions, the notion of separate linearly adding channels, separation of overall shape from fine local detail, and a simple expression of size invariance. The reason why the spatial frequency domain is ignored by image analysts is that it is virtually useless for the main job of vision--building up a description of what is there from the intensity array. The intuition that visual physiologists lack, and which is so important, is for how this may be done. A production system exhibits several interesting ideas--the absence of explicit subroutine calls, a blackboard communication channel, and some notion of a short term memory. But just because production systems display these side-effects (as Fourier analysis "displays" some visual illusions) does not mean that they have anything to do with what is really going on. My own guess would be, for example, that the fact that short-term memory can act as a storage register is probably the least important of its functions. I expect that there are several "intellectual reflexes" that operate on items held there, about which nothing is yet known, and which will eventually be held to be the crucial things about it because they perform central functions like opening up an item's reference window. Studying our performance in close relation to production systems seems to me a waste of time, because it amounts to studying a mechanism not a problem, and can therefore lead to no Type 1 results. The mechanisms that such research is trying to penetrate will be unravelled by studying problems, just as vision research is progressing because it is the problem of vision that is being attacked, not neural visual mechanisms.

    A reflexion of the same criticism can be made of Norman and Rumelhart's work, where they studied the way information seems to be organized in long term memory. Again, the danger is that questions are not asked in relation to a clear information-processing problem. Instead, they are asked and answers proposed in terms of a mechanism--in this case, it is called an "active structural network" and it is so simple and general as to be devoid of theoretical substance.

    They may be able to say that such and such an "association" seems to exist, but they cannot say of what the association consists, nor that it has to be so because to solve problem X (which we can solve) you need a memory organized in such-and-such a way; and that if one has it, certain apparent "associations" occur as side-effects. Experimental psychology can do a valuable job in discovering facts that need explaining, including those about long-term memory, and the work of Shepard [23], Rosch [20] and of Warrington [28] (for example) seems to me very successful at this; but like experimental neurophysiology, experimental psychology will not be able to explain those facts unless information-processing research has identified and solved the appropriate problems X. It seems to me that finding such problems X, and solving them, is what A.I. should be trying to do.”

  • Boulder Dash 11th Mar 2019

    Pinker on AI. The incoherence of the projected fears.

    But at about 20.30 he talks of the horrible stultifying brain numbing jobs that automation will take over! Yeah, exactly. But in the meantime let’s leave the real people doing them now there, for shit pay and conditions, until the capitalists can replace them more cheaply with machines. A good thing...but let’s wait! How long for? Why? Because people like Pinker think and say that it has been proven capitalism is more humane than communism. Communism? Well, twentieth century socialism as in all the usual suspects and accompanying gulags. He certainly doesn’t mean a new system, structured in a way that actually fosters better relations based on the maxim, from each according to ability to each according to need. That would be something like a Parecon for instance. And he certainly doesn’t advocate for a new economic system now or as soon as possible. No, let’s leave the capitalist system in place, a system that created the shit jobs for shit pay and had real people replaced by machines, giving rise to the need for a basic income in the first place. But of course Pinker alludes to a basic income. He understands that when these people lose their shitty jobs they will require some sort of ‘decent’ income. Decent? Just not as decent as those who run the workplaces the robots have taken over perhaps? Or like Elon Musk? Oh, that’s right, he’s only on about 30,000 a year and doesn’t even own a home...effectively homeless. Maybe a basic income of 30,000 a year? More? Ok.

    https://youtu.be/epQxfSp-rdU

  • Boulder Dash 13th Mar 2019

    Neural networks by Chris Woodford. Last updated: March 14, 2018.

    Which is better—computer or brain? Ask most people if they want a brain like a computer and they'd probably jump at the chance. But look at the kind of work scientists have been doing over the last couple of decades and you'll find many of them have been trying hard to make their computers more like brains! How? With the help of neural networks—computer programs assembled from hundreds, thousands, or millions of artificial brain cells that learn and behave in a remarkably similar way to human brains. What exactly are neural networks? How do they work? Let's take a closer look!

    Photo: Computers and brains have much in common, but they're essentially very different. What happens if you combine the best of both worlds—the systematic power of a computer and the densely interconnected cells of a brain? You get a superbly useful neural network.

    How brains differ from computers

    You often hear people comparing the human brain and the electronic computer and, on the face of it, they do have things in common. A typical brain contains something like 100 billion minuscule cells called neurons (no-one knows exactly how many there are and estimates go from about 50 billion to as many as 500 billion). Each neuron is made up of a cell body (the central mass of the cell) with a number of connections coming off it: numerous dendrites (the cell's inputs—carrying information toward the cell body) and a single axon (the cell's output—carrying information away). Neurons are so tiny that you could pack about 100 of their cell bodies into a single millimeter. (It's also worth noting, briefly in passing, that neurons make up only 10 percent of all the cells in the brain; the rest are glial cells, also called neuroglia, that support and protect the neurons and feed them with energy that allows them to work and grow.) Inside a computer, the equivalent to a brain cell is a nanoscopically tiny switching device called a transistor. The latest, cutting-edge microprocessors (single-chip computers) contain over 2 billion transistors; even a basic microprocessor has about 50 million transistors, all packed onto an integrated circuit just 25mm square (smaller than a postage stamp)!

    Artwork: A neuron: the basic structure of a brain cell, showing the central cell body, the dendrites (leading into the cell body), and the axon (leading away from it).

    That's where the comparison between computers and brains begins and ends, because the two things are completely different. It's not just that computers are cold metal boxes stuffed full of binary numbers, while brains are warm, living, things packed with thoughts, feelings, and memories. The real difference is that computers and brains "think" in completely different ways. The transistors in a computer are wired in relatively simple, serial chains (each one is connected to maybe two or three others in basic arrangements known as logic gates), whereas the neurons in a brain are densely interconnected in complex, parallel ways (each one is connected to perhaps 10,000 of its neighbors).

    This essential structural difference between computers (with maybe a few hundred million transistors connected in a relatively simple way) and brains (perhaps 10–100 times more brain cells connected in richer and more complex ways) is what makes them "think" so very differently. Computers are perfectly designed for storing vast amounts of meaningless (to them) information and rearranging it in any number of ways according to precise instructions (programs) we feed into them in advance. Brains, on the other hand, learn slowly, by a more roundabout method, often taking months or years to make complete sense of something really complex. But, unlike computers, they can spontaneously put information together in astounding new ways—that's where the human creativity of a Beethoven or a Shakespeare comes from—recognizing original patterns, forging connections, and seeing the things they've learned in a completely different light.

    Wouldn't it be great if computers were more like brains? That's where neural networks come in!

    Photo: Electronic brain? Not quite. Computer chips are made from thousands, millions, and sometimes even billions of tiny electronic switches called transistors. That sounds like a lot, but there are still far fewer of them than there are cells in the human brain.

    What is a neural network?

    The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a humanlike way. The amazing thing about a neural network is that you don't have to program it to learn explicitly: it learns all by itself, just like a brain!

    But it isn't a brain. It's important to note that neural networks are (generally) software simulations: they're made by programming very ordinary computers, working in a very traditional fashion with their ordinary transistors and serially connected logic gates, to behave as though they're built from billions of highly interconnected brain cells working in parallel. No-one has yet attempted to build a computer by wiring up transistors in a densely parallel structure exactly like the human brain. In other words, a neural network differs from a human brain in exactly the same way that a computer model of the weather differs from real clouds, snowflakes, or sunshine. Computer simulations are just collections of algebraic variables and mathematical equations linking them together (in other words, numbers stored in boxes whose values are constantly changing). They mean nothing whatsoever to the computers they run inside—only to the people who program them.

    Real and artificial neural networks

    Before we go any further, it's also worth noting some jargon. Strictly speaking, neural networks produced this way are called artificial neural networks (or ANNs) to differentiate them from the real neural networks (collections of interconnected brain cells) we find inside our brains. You might also see neural networks referred to by names like connectionist machines (the field is also called connectionism), parallel distributed processors (PDP), thinking machines, and so on—but in this article we're going to use the term "neural network" throughout and always use it to mean "artificial neural network."

    What does a neural network consist of?

    A typical neural network has anything from a few dozen to hundreds, thousands, or even millions of artificial neurons called units arranged in a series of layers, each of which connects to the layers on either side. Some of them, known as input units, are designed to receive various forms of information from the outside world that the network will attempt to learn about, recognize, or otherwise process. Other units sit on the opposite side of the network and signal how it responds to the information it's learned; those are known as output units. In between the input units and output units are one or more layers of hidden units, which, together, form the majority of the artificial brain. Most neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers either side. The connections between one unit and another are represented by a number called a weight, which can be either positive (if one unit excites another) or negative (if one unit suppresses or inhibits another). The higher the weight, the more influence one unit has on another. (This corresponds to the way actual brain cells trigger one another across tiny gaps called synapses.)

    Photo: A fully connected neural network is made up of input units (red), hidden units (blue), and output units (yellow), with all the units connected to all the units in the layers either side. Inputs are fed in from the left, activate the hidden units in the middle, and make outputs feed out from the right. The strength (weight) of the connection between any two units is gradually adjusted as the network learns.

    How does a neural network learn things?

    Information flows through a neural network in two ways. When it's learning (being trained) or operating normally (after being trained), patterns of information are fed into the network via the input units, which trigger the layers of hidden units, and these in turn arrive at the output units. This common design is called a feedforward network. Not all units "fire" all the time. Each unit receives inputs from the units to its left, and the inputs are multiplied by the weights of the connections they travel along. Every unit adds up all the inputs it receives in this way and (in the simplest type of network) if the sum is more than a certain threshold value, the unit "fires" and triggers the units it's connected to (those on its right).
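    That weighted-sum-and-threshold behaviour takes only a few lines of code; here is a sketch with arbitrary illustrative weights:

        def unit(inputs, weights, threshold):
            """Sum the weighted inputs; output 1 (fire) only past the threshold."""
            total = sum(i * w for i, w in zip(inputs, weights))
            return 1 if total > threshold else 0

        # Two input units feeding one output unit: with these weights the
        # unit fires only when both inputs are on (a tiny AND gate).
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", unit([a, b], [0.6, 0.6], threshold=1.0))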

    For a neural network to learn, there has to be an element of feedback involved—just as children learn by being told what they're doing right or wrong. In fact, we all use feedback, all the time. Think back to when you first learned to play a game like ten-pin bowling. As you picked up the heavy ball and rolled it down the alley, your brain watched how quickly the ball moved and the line it followed, and noted how close you came to knocking down the skittles. Next time it was your turn, you remembered what you'd done wrong before, modified your movements accordingly, and hopefully threw the ball a bit better. So you used feedback to compare the outcome you wanted with what actually happened, figured out the difference between the two, and used that to change what you did next time ("I need to throw it harder," "I need to roll slightly more to the left," "I need to let go later," and so on). The bigger the difference between the intended and actual outcome, the more radically you would have altered your moves.

    Photo: Bowling: You learn how to do skillful things like this with the help of the neural network inside your brain. Every time you throw the ball wrong, you learn what corrections you need to make next time.

    Neural networks learn things in exactly the same way, typically by a feedback process called backpropagation (sometimes abbreviated as "backprop"). This involves comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working from the output units through the hidden units to the input units—going backward, in other words. In time, backpropagation causes the network to learn, reducing the difference between actual and intended output to the point where the two exactly coincide, so the network figures things out exactly as it should.

    How does it work in practice?

    Once the network has been trained with enough learning examples, it reaches a point where you can present it with an entirely new set of inputs it's never seen before and see how it responds. For example, suppose you've been teaching a network by showing it lots of pictures of chairs and tables, represented in some appropriate way it can understand, and telling it whether each one is a chair or a table. After showing it, let's say, 25 different chairs and 25 different tables, you feed it a picture of some new design it's not encountered before—let's say a chaise longue—and see what happens. Depending on how you've trained it, it'll attempt to categorize the new example as either a chair or a table, generalizing on the basis of its past experience—just like a human. Hey presto, you've taught a computer how to recognize furniture!

    That doesn't mean to say a neural network can just "look" at pieces of furniture and instantly respond to them in meaningful ways; it's not behaving like a person. Consider the example we've just given: the network is not actually looking at pieces of furniture. The inputs to a network are essentially binary numbers: each input unit is either switched on or switched off. So if you had five input units, you could feed in information about five different characteristics of different chairs using binary (yes/no) answers. The questions might be 1) Does it have a back? 2) Does it have a top? 3) Does it have soft upholstery? 4) Can you sit on it comfortably for long periods of time? 5) Can you put lots of things on top of it? A typical chair would then present as Yes, No, Yes, Yes, No or 10110 in binary, while a typical table might be No, Yes, No, No, Yes or 01001. So, during the learning phase, the network is simply looking at lots of numbers like 10110 and 01001 and learning that some mean chair (which might be an output of 1) while others mean table (an output of 0).
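    Putting the last two sections together, here is a toy end-to-end version of the chair/table example: a five-input network with one hidden layer, trained by backpropagation (plain gradient descent on squared error). The architecture, learning rate, epoch count, and the extra training rows are arbitrary choices for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        # back? top? upholstery? comfy sit? put-things-on?  (1 = yes, 0 = no)
        X = np.array([[1, 0, 1, 1, 0],    # typical chair -> 1
                      [0, 1, 0, 0, 1],    # typical table -> 0
                      [1, 0, 0, 1, 0],    # hard chair    -> 1
                      [0, 1, 0, 0, 0]])   # side table    -> 0
        y = np.array([[1.0], [0.0], [1.0], [0.0]])

        W1 = rng.normal(size=(5, 4)); b1 = np.zeros(4)   # input  -> hidden
        W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

        for _ in range(5000):
            h = sigmoid(X @ W1 + b1)                 # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)      # backward pass: errors
            d_h = (d_out @ W2.T) * h * (1 - h)       # flow from output to input
            W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

        # A made-up encoding of a chaise longue (no full back, no top,
        # upholstered, comfortable, can't stack things) that the network
        # has never seen; it should come out chair-ish (close to 1).
        query = np.array([0, 0, 1, 1, 0])
        print(sigmoid(sigmoid(query @ W1 + b1) @ W2 + b2))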

    What are neural networks used for?

    On the basis of this example, you can probably see lots of different applications for neural networks that involve recognizing patterns and making simple decisions about them. In airplanes, you might use a neural network as a basic autopilot, with input units reading signals from the various cockpit instruments and output units modifying the plane's controls appropriately to keep it safely on course. Inside a factory, you could use a neural network for quality control. Let's say you're producing clothes washing detergent in some giant, convoluted chemical process. You could measure the final detergent in various ways (its color, acidity, thickness, or whatever), feed those measurements into your neural network as inputs, and then have the network decide whether to accept or reject the batch.

    There are lots of applications for neural networks in security, too. Suppose you're running a bank with many thousands of credit-card transactions passing through your computer system every single minute. You need a quick automated way of identifying any transactions that might be fraudulent—and that's something for which a neural network is perfectly suited. Your inputs would be things like 1) Is the cardholder actually present? 2) Has a valid PIN number been used? 3) Have five or more transactions been presented with this card in the last 10 minutes? 4) Is the card being used in a different country from which it's registered? —and so on. With enough clues, a neural network can flag up any transactions that look suspicious, allowing a human operator to investigate them more closely. In a very similar way, a bank could use a neural network to help it decide whether to give loans to people on the basis of their past credit history, current earnings, and employment record.

    Photo: Handwriting recognition on a touchscreen tablet computer is one of many applications perfectly suited to a neural network. Each character (letter, number, or symbol) that you write is recognized on the basis of key features it contains (vertical lines, horizontal lines, angled lines, curves, and so on) and the order in which you draw them on the screen. Neural networks get better and better at recognizing over time.

    Many of the things we all do every day involve recognizing patterns and using them to make decisions, so neural networks can help us out in zillions of different ways. They can help us forecast the stockmarket or the weather, operate radar scanning systems that automatically identify enemy aircraft or ships, and even help doctors to diagnose complex diseases on the basis of their symptoms. There might be neural networks ticking away inside your computer or your cellphone right this minute. If you use cellphone apps that recognize your handwriting on a touchscreen, they might be using a simple neural network to figure out which characters you're writing by looking out for distinct features in the marks you make with your fingers (and the order in which you make them). Some kinds of voice recognition software also use neural networks. And so do some of the email programs that automatically differentiate between genuine emails and spam. Neural networks have even proved effective in translating text from one language to another. Google's automatic translation, for example, has made increasing use of this technology over the last few years to convert words in one language (the network's input) into the equivalent words in another language (the network's output). In 2016, Google announced it was using something it called Neural Machine Translation (NMT) to convert entire sentences, instantly, with a 55–85 percent reduction in errors.

    All in all, neural networks have made computer systems more useful by making them more human. So next time you think you might like your brain to be as reliable as a computer, think again—and be grateful you have such a superb neural network already installed in your head!

  • Boulder Dash 13th Mar 2019

    “YOSHUA BENGIO: Of course, they’re way off sometimes! They’re not trained on enough data, and there are also some fundamental advances in basic research that need to be made for those systems to really understand an image and really understand language. We’re far away from achieving those advances, but the fact that they were able to reach the level of performance that they have was not something we expected.” (Martin Ford, Architects of Intelligence: The Truth About AI from the People Building It.)

    BRUTE FORCE!

    If it don’t fit...force it!

    Stuff it in there.

  • Boulder Dash 13th Mar 2019

    “But, unlike computers, they can spontaneously put information together in astounding new ways—that's where the human creativity of a Beethoven or a Shakespeare comes from—recognizing original patterns, forging connections, and seeing the things they've learned in a completely different light.”

    The creativity of Shakespeare and Beethoven is the same creativity shown by someone gardening, or figuring out some basic shit to get some other basic shit sorted. Creativity is the basic building block of life. But when you stuff, force, through institutional structures, 80% of the world’s population into boxes they have little chance of climbing out of without dying or becoming even more injured, mentally and physically, than they already are, then of course the only creativity that ever gets observed, recognised, paid attention to, is that of the small percentage of humans not so confined to such boxes, and beliefs are forged that real creativity, worthy creativity, is the domain of the few and not the many. Hence why the same fucking names keep cropping up when this word, creativity, is brought up.

    We all know this to be intuitively true.

    It’s interesting that many of AI’s researchers are tackling incredibly difficult if not impossible tasks, trying to recreate intelligence in machines. Really really really hard things. But when it comes to reorganising or trying to figure out better ways of organising production, consumption and allocation, apparently that task is way beyond everyone’s ken...fucking wankers.

    If as many minds as are turned toward inventing some future thinking machine or AGI were turned toward inventing a better way of organising an economy, of production, consumption and allocation, in a more equitable, fair and just way, rather than assuming a lot of the time, if not all of the time, for selfish reasons, that the present system is of course the best of all the worst, as that racist Churchill said about democracy, then perhaps many of the fucked up problems often envisioned for the future surrounding new technologies just wouldn’t arise.

    Perhaps. But what would an ordinary dick like myself know?

  • Boulder Dash 13th Mar 2019

    https://youtu.be/7ROelYvo8f0

  • Boulder Dash 13th Mar 2019

    The goal is to develop an artificial intelligence that can create the kinds of posts Irie, oops, Mycroft Holmes IV does.

    Or, the type that can remain silent.

    Or one that when presented with an image captions it, “I’m not sure what that is.” The pronoun being particularly significant.

    Or, “That’s a really crap photo. Who took it?”

    Or, as I said before, one that talks to itself. Out loud would be cool.

    Is Irie, oops, Mycroft Holmes IV a robot?

    Is Alex or Dave?

    Are all those remaining silent?

    If Sam ‘I’m really the worst kind of compatibilist, one who doesn’t admit to it’ Harris says we do not have free will, and it is obvious, then perhaps we are already robots trying to invent a robot version 2. And it’s inevitable (no Daniel Dennett elbow room here!) in the sense that ‘we’ have no choice in the matter...it’s in the billiard balls banging into each other. Laplace’s demon sees it but ‘we’ cannot because ‘we’ are not angels.

    Could AI invent this kind of shit?

    That’s the real question. Can AI create all the kinds of ordinary and crazy shit humans can, and not just all the ‘correct’ stuff? How do you program fun, stupidity, nuance, subtlety, idiocy, ugliness and not just for money...perhaps if you give all the robots all the shit work you may just get it!

  • Boulder Dash 13th Mar 2019

    “MARTIN FORD: Since you mention it, let’s talk more about AI and the economy, and some of the risks there. I have written a lot about the potential for artificial intelligence to bring on a new Industrial Revolution, and potentially to lead to a lot of job losses. How do you feel about that hypothesis, do you think that it is overhyped?

    YOSHUA BENGIO: No, I don’t think it’s overhyped. The part that is less clear is whether this is going to happen over a decade or three decades. What I can say is that even if we stop basic research in AI and deep learning tomorrow, the science has advanced enough that there’s already a huge amount of social and economic benefit to reap from it simply by engineering new services and new products from these ideas.

    We also collect a huge amount of data that we don’t use. For example, in healthcare, we’re only using a tiny, tiny fraction of what is available, or of what will be available as even more gets digitized every day. Hardware companies are working hard to build deep learning chips that are soon going to be easily a thousand times faster or more energy-efficient than the ones we currently have. The fact that you could have these things everywhere around you, in cars and phones, is clearly going to change the world.

    What will slow things down are things like social factors. It takes time to change the healthcare infrastructure, even if the technology is there. Society can’t change infinitely fast, even if the technology is moving forward.

    MARTIN FORD: If this technology change does lead to a lot of jobs being eliminated, do you think something like a basic income would be a good solution?

    YOSHUA BENGIO: I think a basic income could work, but we have to take a scientific view on this to get rid of our moral priors that say if a person doesn’t work, then they shouldn’t have an income. I think it’s crazy. I think we have to look at what’s going to work best for the economy and what’s going to work best for people’s happiness, and we can do pilot experiments to answer those questions.

    It’s not like there’s one clear answer, there are many ways that society could take care of the people who are going to be left behind and minimize the amount of misery arising from this Industrial Revolution. I’m going to go back to something that my friend Yann LeCun said: If we had had the foresight in the 19th century to see how the Industrial Revolution would unfold, maybe we could have avoided much of the misery that followed. If in the 19th century we had put in place the kind of social safety net that currently exists in most Western nations, instead of waiting until the 1940s and 1950s, then hundreds of millions of people would have led a much better and healthier life. The thing is, it’s going to take probably much less than a century this time to unfold that story, and so the potential negative impacts could be even larger.

    I think it’s really important to start thinking about it right now and to start scientifically studying the options to minimize misery and optimize global well-being. I think it’s possible to do it, and we shouldn’t just rely on our old biases and religious beliefs in order to decide on the answer to these questions.

    MARTIN FORD: I agree, but as you say, it could unfold fairly rapidly. It’s going to be a staggering political problem, too.

    YOSHUA BENGIO: Which is all the more reason to act quickly!” (Architects of Intelligence: The Truth About AI from the People Building It.)

  • Boulder Dash 13th Mar 2019

    “MARTIN FORD: Where do you think this discussion should be taking place now? Is it something primarily think tanks and universities should do, or do you think this should be part of the political discussion both nationally and internationally?

    YOSHUA BENGIO: It should totally be part of the political discussion. I was invited to speak at a meeting of G7 ministers, and one of the questions discussed was, “How do we develop AI in a way that’s both economically positive and keeps the trust of the people?”, because people today do have concerns. The answer is to not do things in secret or in ivory towers, but instead to have an open discussion where everybody around the table, including every citizen, should be part of the discussion. [how does that get done, and when has it ever been the case?] We’re going to have to make collective choices about what kind of future we want, and because AI is so powerful, every citizen should understand at some level what the issues are.

    YOSHUA BENGIO is Full Professor of the Department of Computer Science and Operations Research, scientific director of the Montreal Institute for Learning Algorithms (Mila), CIFAR Program co-director of the CIFAR program on Learning in Machines and Brains, Canada Research Chair in Statistical Learning Algorithms. Together with Ian Goodfellow and Aaron Courville, he wrote Deep Learning, one of the defining textbooks on the subject. The book is available for free from http://www.deeplearningbook.org.”

  • Boulder Dash 13th Mar 2019

    “STUART J. RUSSELL: Let me give you, shall we say, the standard definition of artificial intelligence, which is similar to the one in the book and is now quite widely accepted: An entity is intelligent to the extent that it does the right thing, meaning that its actions are expected to achieve its objectives. The definition applies to both humans and machines.”

    Spose that’s coherent.

    But what if the system is not aware of its objective? Was AlphaGo aware of its objective to win at Go? We knew what its objective was, what we wanted it to do, but did it? So is it intelligent by that definition?

    Just asking.
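    For what it’s worth, the definition is purely behavioural, so a dumb lookup over invented numbers already qualifies; a minimal sketch of an ‘agent’ that ‘does the right thing’ by maximizing expected achievement of an objective it is in no sense aware of:

        def choose(actions, outcomes, utility):
            """Pick the action with the highest expected utility."""
            def expected(action):
                return sum(p * utility(o) for o, p in outcomes[action].items())
            return max(actions, key=expected)

        actions = ["block", "extend"]
        outcomes = {                      # P(outcome | action), made-up numbers
            "block":  {"win": 0.7, "loss": 0.3},
            "extend": {"win": 0.4, "loss": 0.6},
        }
        utility = lambda o: 1.0 if o == "win" else 0.0   # the objective: winning

        print(choose(actions, outcomes, utility))   # -> "block"

    By that reading AlphaGo counts as intelligent whether or not it ‘knew’ anything, which may say more about the definition than about AlphaGo.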

  • Boulder Dash 14th Mar 2019

    learning
    /ˈləːnɪŋ/
    noun

    the acquisition of knowledge or skills through study, experience, or being taught.
    "these children experienced difficulties in learning"

    synonyms: study, studying, education, schooling, tuition, teaching, academic work, instruction, training

    knowledge acquired through study, experience, [backpropagation] or being taught.

    [maybe they need to add here, backpropagation, because AlphaGo apparently, according to the guy above, Stu, really did learn, even though it is a safe bet it didn’t “know” what it was doing. But if it doesn’t “know” what it is doing then what “knowledge” is it acquiring in order for the word “learning” to mean anything? I’m not even certain AlphaGo used backpropagation, I’d need more data inputted into me head (brute force) to be sure, for me to learn.]

    "I liked to parade my learning in front of my sisters"

    synonyms: scholarship, knowledge, education, erudition, culture, intellect, academic attainment, acquirements, enlightenment, illumination, edification, book learning, insight, information, understanding, sageness, wisdom, sophistication;

  • Irie Zen 14th Mar 2019

    Quoting Dave; "You been researching my man, as usual, finding what's relevant." Again.. and again.. much appreciated; Again and again. Quite the roundhouse kick; Yeah; pretty raw.. but awesome. A shit load of scienceologic-/ & tyffikski basics and frameworks to get into.. it*. Great stuff.


    I dig your 'Noam Actually Saying' set. I love Noam, noamsayn? His shit is always dope.. and dense as funk.


    I called for a sum of 42% chomsky-esque parts [here] before; now we parade 'our' research in front of our sisters with.. ehh.. this ~88.88% (incomprehensible*) Intellectual-1337-Expert-Level-ParBaff®/ParGobb® shizzle. Let's condense the most significant facts from the vapors of above nuances. FFF! I'm working on it.


    "..anyone who cannot at least use the terminology persuasively(!) risks being mistaken for kitchen help at the [..] banquet."


     


    Colorless green ideas sleep furiously,
    while the moon starts hatching.


     

  • Boulder Dash 14th Mar 2019

    It’s easy to go straight to Wikipedia, but what the hell. Perhaps it’s easier to mimic or simulate “intelligence” and be done with it.

    https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

  • Boulder Dash 14th Mar 2019

    In the book Thinking Forward, Albert goes through the process of how one may develop a new and better economy...pretty much how he and Hahnel came up with Parecon. Albert has a love for physics or the sciences and I reckon there was a bit of that kind of formality and method behind the construction of Parecon.

    One sees the amount of effort and of course money, with academics leaving for industry in their droves, involved in the development of AI and other technologies. A far more sexy kind of endeavour and kind of, in a sense, apolitical. One cannot really tell what the political proclivities are of these researchers. But one is pretty confident that such dispositions do not tend too far to the portside. Yet “vision” for the future is always part of the show.

    While Noam tends to keep his intellectual pursuits separate from his political activism, I am sure he would agree that the institutional structure of really existing capitalism has a profound effect on the shape of technological progress, in particular what kind of things are pursued. Given the force of these institutional structures, be they economic or political, both as entangled as electrons at the quantum level, it is no hard task to assume that whatever gets done or pursued in the field of AI it will have little to do with the needs and desires of the unworthy. Rather, the trajectory of technological advance and AI will without a doubt be that chosen only by the worthy, by those who truly matter, through the usual processes out of sight of the bewildered herd and obviously, in the minds of the truly worthy, beyond their collective ken. This is the real definition of democracy...democracy with any risk removed.

    It is astounding that so many smart thinkers, great minds, can get together, give talks, discuss, debate, write huge numbers of papers and books and earn very very reasonable salaries along the way, about something as strange and mystical as intelligence, yet, when it comes to making the world a better place, which many of them seem very concerned about in relation to their endeavours, they can barely raise an eyebrow beyond a basic income.

    Again, it is crumbs. Always crumbs. Throw the bewildered herd crumbs when they work, then throw them some more when they lose it. Be it the welfare state or the beyond welfare state of a basic income.

    A basic income...the lazy person’s solution to economic woe.

    We can spend huge amounts of money trying to get a car to drive itself, get a rocket to land upright or to get some machine to merely mimic intelligence and arrogantly call it that. We can have huge numbers of people working on technological advance, so many embedded in private tyrannies run the same way the Soviet Union was, all happily going about their business, with huge smiles on their faces, feeling they are on the front line of improving the lives of all, yet not a one of them would consider the idea that maybe, just maybe, the institutional structure of our economy is at its root rotten and requires a complete overhaul. And even if it be pointed out to many, as Noam Chomsky has been doing for six decades, they would probably recoil from any such thought, most likely with the arrogant retort that reorganising the economy to be more equitable, fair and just would be too difficult if not beyond human endeavour, with a few, no doubt, reminding us all that socialism doesn’t work because it was tried in the 20th cent and failed.

    But what the fuck would I know. I know they are all nice people and love their families and friends, just as Noam says...it’s just when you’re working inside these private tyrannies you have no choice but to behave certain ways.

  • Boulder Dash 15th Mar 2019

    “However, it’s also worth pointing out that it’s very unlikely that there will ever be a point where machines are comparable to human beings in the following sense. As soon as machines can read, then a machine can basically read all the books ever written; and no human can read even a tiny fraction of all the books that have ever been written. Therefore, once an AGI gets past kindergarten reading level, it will shoot beyond anything that any human being has ever done, and it will have a much bigger knowledge base than any human ever has.” (Stuart J. Russell in Architects of Intelligence: The Truth About AI from the People Building It)

    The question is, how does this AGI get access to the books when it doesn’t have money or a library membership card?

    So Google just gives it access to all the books in the world for free after it’s finished uploading them all but the rest of us have to fuckin’ buy them or borrow them.

    Typical!

    • Boulder Dash 15th Mar 2019

      I know there are a lot of books for free, PDFs and open source access, but shit, not every book. So why should it get easy access (it can be time consuming searching for ‘me) to all the books before all the people who’ve had their jobs taken over by robots huh? Fuck the AGI thingy, let the people read first, motherfuckers.

    • Irie Zen 19th Mar 2019

      [..]

  • Boulder Dash 16th Mar 2019

    “MARTIN FORD: Do you think we can navigate as individuals and as a species towards a positive future, once AI has changed our economy?

    STUART J. RUSSELL: Yes, I really do, but I think that a positive future will require human intervention to help people live positive lives. We need to start actively navigating, right now, towards a future that can present the most constructive challenges and the most interesting experiences in life for people. A world that can build emotional resilience and nurture a generally constructive and positive attitude to one’s own life—and to the lives of others. At the moment, we are pretty terrible at doing that. So, we have to start changing that now. I think that we’ll also need to fundamentally change our attitude about what science is for and what it can do for us. I have a cell phone in my pocket, and the human race probably spent on the order of a trillion dollars on the science and engineering that went into ultimately creating things like my cell phone. And yet we spend almost nothing on understanding how people can live interesting and fulfilling lives, and how we can help people around us do that.

    I think as a race that we will need to start acknowledging that if we help another person in the right way, it creates enormous value for them for the rest of their lives. Right now, we have almost no science base for how to do this, we have no degree programs in how to do it, we have very few journals about it, and those that are trying are not taken very seriously.

    The future can have a perfectly functioning economy where people who are expert in living life well, and helping other people, can provide those kinds of services. Those services may be coaching, they may be teaching, they may be consoling, or maybe collaborating, so that we can all really have a fantastic future.

    It’s not a grim future at all: it’s a far better future than what we have at present; but it requires rethinking our education system, our science base, our economic structures.

    We need now to understand how this will function from an economic point of view in terms of the future distribution of income. We want to avoid a situation where there are the super-rich who own the means of production—the robots and the AI systems—and then there are their servants, and then there is the rest of the world doing nothing. That’s sort of the worst possible outcome from an economic point of view.

    So, I do think that there is a positive future that makes sense once AI has changed the human economy, but we need to get a better handle on what that’s going to look like now, so that we can construct a plan for getting there.” (Architects of Intelligence)

    There ya go...economic structures need looking at...and now even...well...?

  • Dave Jones 16th Mar 2019

    I'm assuming I'm not a robot but how would I really know?

    • Irie Zen 16th Mar 2019

      THE HUMANNESS TEST


      No, it's not a joke. This is a test designed to help humanity cope with a serious problem, one that is becoming more of a concern every day: On the phone, over the Internet, and even in person, are you dealing with a human, a computer, a robot, or an alien?


      And are you really a human, or have you been replaced by a robot, or even by an alien, without you knowing it? Has your brain been tampered with by aliens, or maybe by secret government agencies, so that you are no longer as human as you used to be?


      Just how human are you? That is the question.


      Sure, you're thinking, "No sweat!" You're as human as apple pie, right? But this is a difficult test, full of subtleties designed to ferret out the hidden truth - to separate the men from the toys, so to speak. If you're willing to put your humanness to the test, get ready to rumble. And if you don't have the stomach - assuming, that is, that you even have a stomach - to find out that you're not as human as you thought you were - that chemicals in your food, invisible mind control devices, or an alien abduction that you can't even remember has taken away some of your humanness, too bad! Suck it up!


      And if you are not a human, beware. You will fail this test, and we will find you and dissect or dismantle you, whichever seems more diabolical at the time.


      https://howhumanareyou.com/

    • Boulder Dash 16th Mar 2019

      Bertrand Russell (1926)

      Theory of Knowledge
      (for The Encyclopaedia Britannica)

      THEORY OF KNOWLEDGE is a product of doubt. When we have asked ourselves seriously whether we really know anything at all, we are naturally led into an examination of knowing, in the hope of being able to distinguish trustworthy beliefs from such as are untrustworthy. Thus Kant, the founder of modern theory of knowledge, represents a natural reaction against Hume's scepticism. Few philosophers nowadays would assign to this subject quite such a fundamental importance as it had in Kant's "critical" system; nevertheless it remains an essential part of philosophy. It is perhaps unwise to begin with a definition of the subject, since, as elsewhere in philosophical discussions, definitions are controversial, and will necessarily differ for different schools; but we may at least say that the subject is concerned with the general conditions of knowledge, in so far as they throw light upon truth and falsehood.

      It will be convenient to divide our discussion into three stages, concerning respectively (1) the definition of knowledge, (2) data, (3) methods of inference. It should be said, however, that in distinguishing between data and inferences we are already taking sides on a debatable question, since some philosophers hold that this distinction is illusory, all knowledge being (according to them) partly immediate and partly derivative.

      I. THE DEFINITION OF KNOWLEDGE

      The question how knowledge should be defined is perhaps the most important and difficult of the three with which we shall deal. This may seem surprising: at first sight it might be thought that knowledge might be defined as belief which is in agreement with the facts. The trouble is that no one knows what a belief is, no one knows what a fact is, and no one knows what sort of agreement between them would make a belief true. Let us begin with belief.

      Belief.

      Traditionally, a "belief" is a state of mind of a certain sort. But the behaviourists deny that there are states of mind, or at least that they can be known; they therefore avoid the word "belief", and, if they used it, would mean by it a characteristic of bodily behaviour. There are cases in which this usage would be quite in accordance with common sense. Suppose you set out to visit a friend whom you have often visited before, but on arriving at your destination you find that he has moved, you would say "I thought he was still living at his old house." Yet it is highly probable that you did not think about it at all, but merely pursued the usual route from habit. A "thought" or "belief" may, therefore, in the view of common sense, be shown by behaviour, without any corresponding "mental" occurrence. And even if you use a form of words such as is supposed to express belief, you are still engaged in bodily behaviour, provided you pronounce the words out loud or to yourself. Shall we say, in such cases, that you have a belief? Or is something further required?

      It must be admitted that behaviour is practically the same whether you have an explicit belief or not. People who are out of doors when a shower of rain comes on put up their umbrellas, if they have them; some say to themselves "it has begun to rain", others act without explicit thought, but the result is exactly the same in both cases. In very hot weather, both human beings and animals go out of the sun into the shade, if they can; human beings may have an explicit "belief" that the shade is pleasanter, but animals equally seek the shade. It would seem, therefore, that belief, if it is not a mere characteristic of behaviour, is causally unimportant. And the distinction of truth and error exists where there is behaviour without explicit belief, just as much as where explicit belief is present; this is shown by the illustration of going to where your friend used to live. Therefore, if theory of knowledge is to be concerned with distinguishing truth from error, we shall have to include the cases in which there is no explicit belief, and say that a belief may be merely implicit in behaviour. When old Mother Hubbard went to the cupboard, she "believed" that there was a bone there, even if she had no state of mind which could be called cognitive in the sense of introspective psychology.

      Words.

      In order to bring this view into harmony with the facts of human behaviour, it is of course necessary to take account of the influence of words. The beast that desires shade on a hot day is attracted by the sight of darkness; the man can pronounce the word "shade", and ask where it is to be found. According to the behaviourists, it is the use of words and their efficacy in producing conditioned responses that constitutes "thinking". It is unnecessary for our purposes to inquire whether this view gives the whole truth about the matter. What it is important to realise is that verbal behaviour has the characteristics which lead us to regard it as pre-eminently a mark of "belief", even when the words are repeated as a mere bodily habit. Just as the habit of going to a certain house when you wish to see your friend may be said to show that you "believe" he lives in that house, so the habit of saying "two and two are four", even when merely verbal, must be held to constitute "belief" in this arithmetical proposition. Verbal habits are, of course, not infallible evidences of belief. We may say every Sunday that we are miserable sinners, while really thinking very well of ourselves. Nevertheless, speaking broadly, verbal habits crystallise our beliefs, and afford the most convenient way of making them explicit. To say more for words is to fall into that superstitious reverence for them which has been the bane of philosophy throughout its history.

      Belief and Behaviour

      We are thus driven to the view that, if a belief is to be something causally important, it must be defined as a characteristic of behaviour. This view is also forced upon us by the consideration of truth and falsehood, for behaviour may be mistaken in just the way attributable to a false belief, even when no explicit belief is present; for example, when a man continues to hold up his umbrella after the rain has stopped without definitely entertaining the opinion that it is still raining. Belief in this wider sense may be attributed to animals; for example, to a dog who runs to the dining-room when he hears the gong. And when an animal behaves to a reflection in a looking-glass as if it were "real", we should naturally say that he "believes" there is another animal there; this form of words is permitted by our definition.

      It remains, however, to say what characteristics of behaviour can be described as beliefs. Both human beings and animals act so as to achieve certain results, e.g. getting food. Sometimes they succeed, sometimes they fail; when they succeed, their relevant beliefs are "true", but when they fail, at least one is false. There will usually be several beliefs involved in a given piece of behaviour, and variations of environment will be necessary to disentangle the causal characteristics which constitute the various beliefs. This analysis is effected by language, but would be very difficult if applied to dumb animals. A sentence may be taken as a law of behaviour in any environment containing certain characteristics; it will be "true" if the behaviour leads to results satisfactory to the person concerned, and otherwise it will be "false". Such, at least, is the pragmatist definition of truth and falsehood.

      Truth in Logic.

      There is also, however, a more logical method of discussing this question. In logic, we take for granted that a word has a "meaning"; what we signify by this can, I think, only be explained in behaviouristic terms, but when once we have acquired a vocabulary of words which have "meaning", we can proceed in a formal manner without needing to remember what "meaning" is. Given the laws of syntax in the language we are using, we can construct propositions by putting together the words of the language, and these propositions have meanings which result from those of the separate words and are no longer arbitrary. If we know that certain of these propositions are true, we can infer that certain others are true, and that yet others are false; sometimes this can be inferred with certainty, sometimes with greater or less probability. In all this logical manipulation, it is unnecessary to remember what constitutes meaning and what constitutes truth or falsehood. It is in this formal region that most philosophy has lived, and within this region a great deal can be said that is both true and important, without the need of any fundamental doctrine about meaning. It even seems possible to define "truth" in terms of "meaning" and "fact", as opposed to the pragmatic definition which we gave a moment ago. If so, there will be two valid definitions of "truth", though of course both will apply to the same propositions.

      The purely formal definition of "truth" may be illustrated by a simple case. The word "Plato" means a certain man; the word "Socrates" means a certain other man; the word "love" means a certain relation. This being given, the meaning of the complex symbol "Plato loves Socrates" is fixed; we say that this complex symbol is "true" if there is a certain fact in the world, namely the fact that Plato loves Socrates, and in the contrary case the complex symbol is false. I do not think this account is false, but, like everything purely formal, it does not probe very deep.

      Uncertainty and Vagueness.

      In defining knowledge, there are two further matters to be taken into consideration, namely the degree of certainty and the degree of precision. All knowledge is more or less uncertain and more or less vague. These are, in a sense, opposing characters: vague knowledge has more likelihood of truth than precise knowledge, but is less useful. One of the aims of science is to increase precision without diminishing certainty. But we cannot confine the word "knowledge" to what has the highest degree of both these qualities; we must include some propositions that are rather vague and some that are only rather probable. It is important, however, to indicate vagueness and uncertainty where they are present, and, if possible, to estimate their degree. Where this can be done precisely, it becomes "probable error" and "probability". But in most cases precision in this respect is impossible.

      II. THE DATA

      In advanced scientific knowledge, the distinction between what is a datum and what is inferred is clear in fact, though sometimes difficult in theory. In astronomy, for instance, the data are mainly certain black and white patterns on photographic plates. These are called photographs of this or that part of the heavens, but of course much inference is involved in using them to give knowledge about stars or planets. Broadly speaking, quite different methods and a quite different type of skill are required for the observations which provide the data in a quantitative science, and for the deductions by which the data are shown to support this or that theory. There would be no reason to expect Einstein to be particularly good at photographing the stars near the sun during an eclipse. But although the distinction is practically obvious in such cases, it is far less so when we come to less exact knowledge. It may be said that the separation into data and inferences belongs to a well-developed stage of knowledge, and is absent in its beginnings.

      Animal Inference.

      But just as we found it necessary to admit that knowledge may be only a characteristic of behaviour, so we shall have to say about inference. What a logician recognises as inference is a refined operation, belonging to a high degree of intellectual development; but there is another kind of inference which is practised even by animals. We must consider this primitive form of inference before we can become clear as to what we mean by "data".

      When a dog hears the gong and immediately goes into the dining-room, he is obviously, in a sense, practising inference. That is to say, his response is appropriate, not to the noise of the gong in itself, but to that of which the noise is a sign: his reaction is essentially similar to our reactions to words. An animal has the characteristic that, when two stimuli have been experienced together, one tends to call out the response which only the other could formerly call out. If the stimuli (or one of them) are emotionally powerful, one joint experience may be enough; if not, many joint experiences may be required. This characteristic is totally absent in machines. Suppose, for instance, that you went every day for a year to a certain automatic machine, and lit a match in front of it at the same moment at which you inserted a penny; it would not, at the end, have any tendency to give up its chocolate on the mere sight of a burning match. That is to say, machines do not display inference even in the form in which it is a mere characteristic of behaviour. Explicit inference, such as human beings practise, is a rationalising of the behaviour which we share with the animals. Having experienced A and B together frequently, we now react to A as we originally reacted to B. To make this seem rational, we say that A is a "sign" of B, and that B must really be present though out of sight. This is the principle of induction, upon which almost all science is based. And a great deal of philosophy is an attempt to make the principle seem reasonable.

      Whenever, owing to past experience, we react to A in the manner in which we originally reacted to B, we may say that A is a "datum" and B is "inferred". In this sense, animals practise inference. It is clear, also, that much inference of this sort is fallacious: the conjunction of A and B in past experience may have been accidental. What is less clear is that there is any way of refining this type of inference which will make it valid. That, however, is a question which we shall consider later. What I want to consider now is the nature of those elements in our experiences which, to a reflective analysis, appear as "data" in the above-defined sense.

      Mental and Physical Data.

      Traditionally, there are two sorts of data, one physical, derived from the senses, the other mental, derived from introspection. It seems highly questionable whether this distinction can be validly made among data; it seems rather to belong to what is inferred from them. Suppose, for the sake of definiteness, that you are looking at a white triangle drawn on a black-board. You can make the two judgments: "There is a triangle there", and "I see a triangle." These are different propositions, but neither expresses a bare datum; the bare datum seems to be the same in both propositions. To illustrate the difference of the propositions: you might say "There is a triangle there", if you had seen it a moment ago but now had your eyes shut, and in this case you would not say "I see a triangle"; on the other hand, you might see a black dot which you knew to be due to indigestion or fatigue, and in this case you would not say "There is a black dot there." In the first of these cases, you have a clear case of inference, not of a datum.

      In the second case, you refuse to infer a public object, open to the observation of others. This shows that "I see a triangle" comes nearer to being a datum than "There is a triangle there." But the words "I" and "see" both involve inferences, and cannot be included in any form of words which aims at expressing a bare datum. The word "I" derives its meaning, partly, from memory and expectation, since I do not exist only at one moment. And the word "see" is a causal word, suggesting dependence upon the eyes; this involves experience, since a new-born baby does not know that what it sees depends upon its eyes. However, we can eliminate this dependence upon experience, since obviously all seen objects have a common quality, not belonging to auditory or tactual or any other objects. Let us call this quality that of being "visual". Then we can say: "There is a visual triangle." This is about as near as we can get in words to the datum for both propositions: "There is a triangle there", and "I see a triangle." The difference between the propositions results from different inferences: in the first, to the public world of physics, involving perceptions of others; in the second, to the whole of my experience, in which the visual triangle is an element. The difference between the physical and the mental, therefore, would seem to belong to inferences and constructions, not to data.

      It would thus seem that data, in the sense in which we are using the word, consist of brief events, rousing in us various reactions, some of which may be called "inferences", or may at least be said to show the presence of inference. The two-fold organisation of these events, on the one hand as constituents of the public world of physics, on the other hand as parts of a personal experience, belongs to what is inferred, not to what is given. For theory of knowledge, the question of the validity of inference is vital. Unfortunately, nothing very satisfactory can be said about it, and the most careful discussions have been the most sceptical. However, let us examine the matter without prejudice.

      III. METHODS OF INFERENCE

      It is customary to distinguish two kinds of inference, Deduction and Induction. Deduction is obviously of great practical importance, since it embraces the whole of mathematics. But it may be questioned whether it is, in any strict sense, a form of inference at all. A pure deduction consists merely of saying the same thing in another way. Application to a particular case may have importance, because we bring in the experience that there is such a case; for example, when we infer that Socrates is mortal because all men are mortal. But in this case we have brought in a new piece of experience, not involved in the abstract deductive schema. In pure deduction, we deal with x and y, not with empirically given objects such as Socrates and Plato. However this may be, pure deduction does not raise the problems which are of most importance for theory of knowledge, and we may therefore pass it by.

      Induction.

      The important forms of inference for theory of knowledge are those in which we infer the existence of something having certain characteristics from the existence of something having certain other characteristics. For example: you read in the newspaper that a certain eminent man is dead, and you infer that he is dead. Sometimes, of course, the inference is mistaken. I have read accounts of my own death in newspapers, but I abstained from inferring that I was a ghost. In general, however, such inferences are essential to the conduct of life. Imagine the life of a sceptic who doubted the accuracy of the telephone book, or, when he received a letter, considered seriously the possibility that the black marks might have been made accidentally by an inky fly crawling over the paper. We have to accept merely probable knowledge in daily life, and theory of knowledge must help us to decide when it really is probable, and not mere animal prejudice.

      Probability.

      Far the most adequate discussion of the type of inference we are considering is obtained in J. M. Keynes's Treatise on Probability (1921). So superior is his work to that of his predecessors that it renders consideration of them unnecessary. Mr. Keynes considers induction and analogy together, and regards the latter as the basis of the former. The bare essence of an inference by analogy is as follows: We have found a number of instances in which two characteristics are combined, and no instances in which they are not combined; we find a new instance in which we know that one of the characteristics is present, but do not know whether the other is present or absent; we argue by analogy that probably the other characteristic is also present. The degree of probability which we infer will vary according to various circumstances. It is undeniable that we do make such inferences, and that neither science nor daily life would be possible without them. The question for the logician is as to their validity. Are they valid always, never or sometimes? And in the last case, can we decide when they are valid?

      Limitation of Variety.

      Mr. Keynes considers that mere increase in the number of instances in which two qualities are found together does not do much to strengthen the probability of their being found together in other instances. The important point, according to him, is that in the known cases the instances should have as few other qualities in common as possible. But even then a further assumption is required, which is called the principle of limitation of variety. This assumption is stated as follows: "That the objects in the field, over which our generalisations extend, do not have an infinite number of independent qualities; that, in other words, their characteristics, however numerous, cohere together in groups of invariable connection, which are finite in number." It is not necessary to regard this assumption as certain; it is enough if there is some finite probability in its favour.

      It is not easy to find any arguments for or against an a priori finite probability in favour of the limitation of variety. It should be observed, however, that a "finite" probability, in Mr. Keynes's terminology, means a probability greater than some numerically measurable probability, e.g. the probability of a penny coming "heads" a million times running. When this is realised, the assumption certainly seems plausible. The strongest argument on the side of scepticism is that both men and animals are constantly led to beliefs (in the behaviouristic sense), which are caused by what may be called invalid inductions; this happens whenever some accidental collocation has produced an association not in accordance with any objective law. Dr. Watson caused an infant to be terrified of white rats by beating a gong behind its head at the moment of showing it a white rat (Behaviourism). On the whole, however, accidental collocations will usually tend to be different for different people, and therefore the inductions in which men are agreed have a good chance of being valid. Scientific inductive or analogical inferences may, in the best cases, be assumed to have a high degree of probability, if the above principle of limitation of variety is true or finitely probable. This result is not so definite as we could wish, but it is at least preferable to Hume's complete scepticism. And it is not obtained, like Kant's answer to Hume, by a philosophy ad hoc; it proceeds on the ordinary lines of scientific method.

      Grades of Certainty.

      Theory of knowledge, as we have seen, is a subject which is partly logical, partly psychological; the connection between these parts is not very close. The logical part may, perhaps, come to be mainly an organisation of what passes for knowledge according to differing grades of certainty: some portions of our beliefs involve more dubious assumptions than are involved in other parts. Logic and mathematics on the one hand, and the facts of perception on the other, have the highest grade of certainty; where memory comes in, the certainty is lessened; where unobserved matter comes in, the certainty is further lessened; beyond all these stages comes what a cautious man of science would admit to be doubtful. The attempt to increase scientific certainty by means of some special philosophy seems hopeless, since, in view of the disagreement of philosophers, philosophical propositions must count as among the most doubtful of those to which serious students give an unqualified assent. For this reason, we have confined ourselves to discussions which do not assume any definite position on philosophical as opposed to scientific questions.
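
      [Nerd footnote to the essay above: Russell's "animal inference" and the Keynes-style argument by analogy both boil down to counting joint occurrences. A minimal sketch, assuming i.i.d. trials and a uniform prior, using Laplace's rule of succession as one standard formalisation; Keynes's own treatment is far richer.]

      ```python
      # Chance that the next A-event also brings B, after k of n past
      # A-events did: Laplace's rule of succession, (k + 1) / (n + 2).

      def rule_of_succession(k: int, n: int) -> float:
          """P(next A is followed by B), given k successes in n trials."""
          return (k + 1) / (n + 2)

      # Russell's dog: the gong has sounded 30 times, dinner followed
      # every time, so off to the dining-room it goes.
      print(rule_of_succession(30, 30))  # 31/32 ~ 0.97

      # Russell's chocolate machine: a year of matches and pennies, but
      # it records nothing, so its "belief" never moves off the prior.
      print(rule_of_succession(0, 0))    # 1/2: no learning at all
      ```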

    • Irie Zen 19th Mar 2019

      [..]

  • Boulder Dash 16th Mar 2019

    https://m.youtube.com/watch?v=HipTO_7mUOw&vl=en

    https://m.youtube.com/watch?v=DuABc9ZNtrA

  • Boulder Dash 16th Mar 2019

    Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'

    Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists: Stephen Hawking, Stuart Russell, Max Tegmark, Frank Wilczek

    Thursday 1 May 2014 21:30


    With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.
    Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

    The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

    Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.

    Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it might play out differently from in the movie: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a "singularity" and Johnny Depp's movie character calls "transcendence".

    One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

    So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

    Stephen Hawking is the director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity. Stuart Russell is a computer-science professor at the University of California, Berkeley and a co-author of 'Artificial Intelligence: A Modern Approach'. Max Tegmark is a physics professor at the Massachusetts Institute of Technology (MIT) and the author of 'Our Mathematical Universe'. Frank Wilczek is a physics professor at MIT and a 2004 Nobel laureate for his work on the strong nuclear force.

  • Boulder Dash 16th Mar 2019

    “STUART J. RUSSELL: So, it’s an interesting story. It started when I got a call from National Public Radio, who wanted to interview me about this movie called Transcendence. I was living in Paris at the time and the movie wasn’t out in Paris, so I hadn’t seen it yet.

    I happened to have a stopover in Boston on the way back from a conference in Iceland, so I got off the plane in Boston and I went to the movie theatre to watch the movie. I’m sitting there towards the front of the theatre, and I don’t really know what’s going to happen in the movie at all and then, “Oh, look! It’s showing Berkeley computer science department. That’s kind of funny.” Johnny Depp is playing the AI professor, “Oh, that’s kind of interesting.” He’s giving a talk about AI, and then someone, some anti-AI terrorist decides to shoot him. So, I’m sort of involuntarily shrinking down in my seat seeing this happening, because that could really be me at that time. Then the basic plot of the movie is that before he dies they manage to upload his brain into a big quantum computer and the combination of those two things creates a super-intelligent entity that threatens to take over the world because it very rapidly develops all kinds of amazing new technologies.

    So anyway, we wrote an article that was, at least superficially, a review of the movie, but it was really saying, “You know, although this is just a movie, the underlying message is real: which is that if—or when—we create machines that can have a dominant effect on the real world, then that can present a very serious problem for us: that we could, in fact, cede control over our futures to other entities besides humans.”

    The problem is very straightforward: our intelligence is what gives us our ability to control the world; and so, intelligence represents power over the world. If something has a greater degree of intelligence, then it has more power.

    We are already on the way to creating things that are much more powerful than us; but somehow, we have to make sure that they never, ever, have any power. So, when we describe the AI situation like that, people say, “Oh, I see. OK, there’s a problem.”” (Stuart Russell, Architects of Intelligence)

    Ok, who has real power now? Who makes all or most decisions now? The State? Huge private tyrannies like Google? Are they separate...the State and private tyrannies?

    We’ve already created entities, both people and abstract ones like private tyrannies, hugely more powerful than most or all. So making sure they don’t get it has past/passed/(parsed). It’s about removing it. But when we describe this situation to others, in regard to economic and political structures, they, or many or some, can see there’s a problem but as one we can do little about...a few tweaks here and there and becoming better people. Rather, real problems are seen as something only technology can solve, or superintelligent artificial beings that, if controlled properly, may deliver on Utopia.

    You know, the end of history and all that.

    So I guess we wait for the Stuart Russells of the world, but not the Russells of a comedic type, to wrench us out of suffering.

    There is suffering.
    There is the path that leads to suffering.
    There is the cessation of suffering.
    There is the path that leads to the cessation of suffering.

    That path is technology and AI. Woohoo!

  • Boulder Dash 16th Mar 2019

    “I’ll give you an example to demonstrate this margin of safety that we really do need. Let’s go back to an old idea that we can—if we ever need to—just switch the machine off if we get into trouble. Well, of course, you know, if the machine has an objective like, “fetch the coffee,” then obviously a sufficiently intelligent machine realizes that if someone switches it off, then it’s not going to be able to fetch the coffee. If its life’s mission, if its objective, is to fetch the coffee, then logically it will take steps to prevent itself from being switched off. It will disable the Off switch. It will possibly neutralize anyone who might attempt to switch it off. So, you can imagine all these unanticipated consequences of a simple objective like “fetch the coffee,” when you have a sufficiently intelligent machine.

    Now in my vision for AI, we instead design the machine so that although it still wants to “fetch the coffee” it understands that there are a lot of other things that human beings might care about, but it doesn’t really know what those are! In that situation, the AI understands that it might do something that the human doesn’t like—and if the human switches it off, that’s to prevent something that would make the human unhappy. Since in this vision the goal of the machine is to avoid making the human unhappy, even though the AI doesn’t know what that means, it actually has an incentive to allow itself to be switched off.” (Stu Russell, Architects of Intelligence)

    The above is interesting. It’s the kind of logical reasoning Albert often applies to institutions like corporate divisions of labour...if you have x you will get y but you may not want y, but rather z. It’s the kind of thing that many critics have applied to market imperatives and capitalist laws of motion...of course including the most well known, Karl.

    This idea of the design of something like an economy that may cause anti-social behaviour is well known to critics of markets and capitalism. It’s not that people are necessarily bad, as Noam actually says, but rather the design of the institutions they spend most of their lives embedded in; hence why they do shifty things that have shitty effects on many of their fellow humans.

    So, following Stu’s reasoning and logic, it would seem that the institutional structures that drive anti-social behaviour, that drive the homogenisation of creative thought, that destroy participatory decision making, that create huge disparities of wealth resulting in poverty and death, MUST be considered and altered in order that they foster far better outcomes and relations. So you design them with a built in understanding of consequences based on a set of desired values.
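
    Stu’s off-switch argument has in fact been formalised (the “off-switch game” of Hadfield-Menell et al., with Russell as co-author); here is a back-of-the-envelope version with invented numbers, assuming the robot is genuinely uncertain about the human’s utility for “fetch the coffee”. A sketch of the idea, not anyone’s exact model.

    ```python
    import random

    random.seed(1)

    # Robot's belief over the human's utility U of "fetch the coffee":
    # usually mildly good, occasionally disastrous. Numbers invented.
    samples = [random.gauss(0.5, 2.0) for _ in range(100_000)]
    n = len(samples)

    act = sum(samples) / n                         # just act, switch unused
    defer = sum(max(u, 0.0) for u in samples) / n  # human vetoes whenever U < 0
    disable = act                                  # kill the switch, then act

    print(f"E[U | act]     = {act:+.3f}")     # ~ +0.5
    print(f"E[U | defer]   = {defer:+.3f}")   # ~ +1.1, the winner
    print(f"E[U | disable] = {disable:+.3f}") # no better than acting blind
    ```

    The middle number is the whole point: as long as the robot is uncertain about what we want, leaving the Off switch alone has higher expected utility than disabling it.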

    Sam ‘The World’s Most Deceptive Compatibilist’ Harris is very concerned about AI, and regularly talks about human flourishing and optimising human well-being, often to much applause, but very very rarely, if ever, does he touch on the devastating effects of our current economy and its institutional structure, let alone ever suggest an alternative. To him, and many at the Intellectual - It’s Only A Joke Name - Dark Web, that would be tantamount to an association with the “putrid effusions of Noam Chomsky”, social justice warriors, post-modern nutters, the gulags and vegans...and we couldn’t have that now could we?

    https://m.youtube.com/watch?v=yFhvgPhV4h4

  • Boulder Dash 16th Mar 2019

    “MARTIN FORD: Think of it in terms of passing the Turing test, and not for five minutes but for two hours, so that you can have a wide-ranging conversation that’s as good as a human being. Is that feasible, whether it’s one system or some community of systems?

    GEOFFREY HINTON: I think there’s a reasonable amount of probability that it will happen in somewhere between 10 and 100 years. I think there’s a very small probability, it’ll happen before the end of the next decade, and I think there’s also a big probability that humanity gets wiped out by other things before the next 100 years occurs.” (AoI)

  • Boulder Dash 16th Mar 2019

    “MARTIN FORD: Speaking of Canadians, do you have any interaction with your fellow faculty member, Jordan Peterson? It seems like there’s all kinds of disruption coming out of the University of Toronto...

    GEOFFREY HINTON: Ha! Well, all I’ll say about that is that he’s someone who doesn’t know when to keep his mouth shut.”

    Now there ya go!

  • Boulder Dash 17th Mar 2019

    “INNOVATION

    Do individuals in a participatory economy have an incentive to search for innovations, and do workers councils have an incentive to implement productive innovations once they’re found?

    A participatory economy does not reward those who discover productive innovations with vastly greater consumption rights than others who make comparable personal sacrifices or effort in their work. Instead a participatory economy emphasizes direct social recognition of outstanding achievements. This is for a number of reasons. First, successful innovation is almost always the result of cumulative human creativity and not a single person’s endeavours. Also, an individual’s contribution is often the result of genius and/or luck as much as personal sacrifice, all of which implies that recognizing innovation through social esteem instead of material reward is more ethical. Second, social incentives will not necessarily prove less powerful than material ones. No economy ever has paid innovators the full social value of their innovations because if it did, there would be little left for those who apply them over long periods of time. This means if material compensation was the only reward, innovation would be under-stimulated in any case. Material reward is often merely an imperfect substitute for something else that is truly desired — social esteem. Actual policy in a participatory economy would ultimately be settled democratically in light of results.

    However, there are material incentives to implement socially useful innovations in a participatory economy. Any change that increases the social benefits of the outputs that workers produce, or reduces the social costs of the inputs they use, will increase the workers council’s social benefit to social cost ratio. This makes it easier for the council to get its proposals accepted in the participatory planning process, can allow workers to reduce their effort, can permit them to improve the quality of their work life, or can raise the average effort rating (i.e. income) the council can award its members. But just as in capitalism, adjustments will make any advantage temporary. As the innovation spreads to other enterprises, and as indicative prices change, the full social benefits of their innovation will be both realized and spread to all workers and consumers.

    The faster the adjustments are made, the more efficient and equitable the outcome. On the other hand, the more rapid the adjustments, the less the “material incentive” to innovate and the greater the incentive to “ride for free” on the innovations of others. A participatory economy is better equipped to manage this tradeoff compared to a capitalistic economy. Most importantly, in a participatory economy “service to society” is recognised directly and is therefore a stronger incentive to innovation. This means that more innovation will occur in a participatory economy than in capitalism for the same speed of adjustments. Secondly, research and development (R&D) is largely a public good which usually is undersupplied in a market economy, whereas a participatory economy allocates resources to the production of public goods just as easily as to the production of private goods. Finally, in capitalism the only mechanism for providing incentives for innovation is to slow down their spread, at the expense of efficiency. This is done by making the transaction costs of registering patents and negotiating licenses from patent holders very high. While it is recommended only as a last resort, the transaction costs of granting extra consumption rights for a limited period of time would be negligible in a participatory economy.”
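
    Toy arithmetic for the social benefit to social cost ratio the piece leans on. The indicative prices and quantities below are invented for illustration; in a real plan they would come out of the planning process itself. The point is only the direction of change: an innovation that cuts input costs raises the ratio and eases the council's next proposal.

    ```python
    def sb_sc_ratio(outputs, inputs, prices):
        """Social benefits of outputs over social costs of inputs,
        both valued at the current round's indicative prices."""
        benefit = sum(q * prices[g] for g, q in outputs.items())
        cost = sum(q * prices[g] for g, q in inputs.items())
        return benefit / cost

    # Invented indicative prices and quantities for a bakery council.
    prices = {"bread": 2.0, "flour": 0.5, "labour_hour": 15.0, "oven_kwh": 0.2}
    outputs = {"bread": 1000}

    before = {"flour": 600, "labour_hour": 80, "oven_kwh": 900}
    after = {"flour": 600, "labour_hour": 80, "oven_kwh": 450}  # efficient ovens

    print(round(sb_sc_ratio(outputs, before, prices), 3))  # ~1.19
    print(round(sb_sc_ratio(outputs, after, prices), 3))   # ~1.258, easier sell
    ```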

    • Boulder Dash 17th Mar 2019

      The above short piece is posted only because many of the online books at ZNet aren’t coming up.

    • Boulder Dash 17th Mar 2019

      Robin Hahnel, from Economic Democracy. The similarities with the above, from the Participatory Economics website, are obvious...parts lifted directly, it seems.

      “Dynamic Efficiency

      Proponents of new models of a socialist economy which seek to combine economic planning with wide participation in decision-making emphasize the potential superiority of their systems over other systems at meeting human needs. However, the claim of superiority has been typically cast in a static framework that largely overlooks the performance of participatory planning in the most important dynamic aspect of economic life: technical change and the process which brings it about—innovation. Does the system provide strong incentives for innovation? Does the system provide substantial means for carrying out innovation? Does the system generate innovative effort that contributes effectively to the improvement of human welfare? —David Kotz

      Even if there are incentives to work hard and smart, even if there are incentives to educate and train oneself to be more socially useful, and even if incentives are compatible with an efficient allocation of scarce productive resources at any point in time, this does not guarantee dynamic efficiency. Do individuals have an incentive to search for innovations, and do worker councils have an incentive and means to implement productive innovations once they are found? These are important questions since even when people come to recognize that environmentally and socially destructive growth is no longer in their interests, raising living standards for today’s disadvantaged, reducing everyone’s work time, improving the quality of everyone’s work lives, and restoring the natural environment will require a great deal of innovation.

      Supporters of participatory economics do not support rewarding those who succeed in discovering productive innovations with vastly greater consumption rights than others who make equivalent personal sacrifices in work. Instead we recommend emphasizing social recognition of outstanding achievements for a variety of reasons. First, successful innovation is often the outcome of cumulative human creativity for which a single individual is rarely responsible. Second, an individual’s contribution is often the product of genius and luck as much as effort, which implies that recognizing innovation through social esteem rather than material reward is superior on ethical grounds. Third, we are not convinced that social incentives, when tried, will prove less powerful than material ones. It should be recognized that no economy ever has, or could pay innovators the full social value of their innovations. If it did, there would be no benefit left to those who apply them! This means if material compensation is the only reward, innovation will be understimulated in any case. Moreover, often material reward is merely an imperfect substitute for what is truly desired: social esteem. How else can one explain why those who already have more wealth than they, their children, and their children’s children can possibly consume continue to strive to accumulate more? In any case, these are the opinions of those who advocate replacing capitalism with participatory economics. Actual policy in a participatory economy would be settled democratically by its members in light of results.

      Nor do we see why critics believe there would be insufficient incentives for enterprises to seek and implement innovations, unless they measure a participatory economy against a mythical and misleading image of capitalism. Sometimes supporters of capitalism presume that innovating capitalist enterprises capture the full benefits of their successes, while it is also assumed that innovations spread instantaneously to all enterprises in an industry. When made explicit it is obvious these assumptions are contradictory. Yet only if both assumptions hold can one conclude that capitalism provides maximum material stimulus to innovation and achieves technological efficiency throughout the economy. In reality innovative capitalist enterprises temporarily capture “super profits,” which are competed away more or less rapidly depending on a host of circumstances including industry structure, barriers to entry, patent laws, and how vigorously intellectual property rights are enforced. This means that in reality there is an unavoidable trade-off in capitalist economies between stimulus to innovation and the rapid spread of innovations, a trade-off between dynamic and static efficiency.

      In a participatory economy all innovations will immediately be made available to all enterprises, so there will never be any loss of static efficiency. And while nonmaterial incentives for innovative firms are emphasized, material incentives are available if necessary without sacrificing static efficiency. There are strong incentives for worker councils to search for innovations that increase the social benefits of their outputs, or reduce the social costs of their inputs since this would increase the worker council’s social benefit to social cost ratio. Raising the social benefit-to-social cost ratio makes it easier for the council to get its proposals accepted in the participatory-planning process, can allow workers to reduce their efforts, can permit them to improve the quality of their work lives, or can raise the average effort rating the council can award its members. However, it is true that the rapid spread of innovations in a participatory economy will render these advantages temporary. As the innovation spreads to other enterprises, estimates of social opportunity costs will change, job complexes will be rebalanced across enterprises and industries, and the social benefits of innovations as they are realized will be spread to all workers and consumers. So what will curb the incentive to “free ride” on the innovations of others if material benefits to innovating enterprises disappear so quickly in a participatory economy?

      First, recognition of “social serviceability” is a more powerful incentive to innovation in a participatory economy where acquisition of personal wealth is both less necessary and less likely to elicit social esteem. Second, a participatory economy is better suited to allocating sufficient resources to research and development because research and development is largely a public good that is predictably undersupplied in market economies but not discriminated against by participatory planning. Third, while we recommend it only as a last resort, there are no reasons in a participatory economy that the recalibration of work complexes for innovative workplaces cannot be delayed, or extra consumption allowances for workers in innovative enterprises cannot be granted for some period of time if members of a participatory economy decide greater material rewards for innovative enterprises are necessary to achieve desirable rates of technical progress.”

    • Irie Zen 19th Mar 2019

      [..]

  • Boulder Dash 17th Mar 2019

    A refresher course. Imagine if you will, AI being developed inside a Parecon. Imagine!

    “Participatory Economics Part 1: Origins, Heritage, Substance
    I have been asked to write an essay that presents the origins and heritage of participatory economics, explains the logical basis for its components, presents and addresses peoples' problems with it, and finally makes a case for why diverse people with diverse agendas should care about it. Even seeking brevity, this will require three parts, this being the first.

    [Image: watercolor by James G. Swan depicting the Klallam people of chief Chetzemoka at Port Townsend]
    What is Participatory Economics and where did it come from?

    Participatory Economics or Parecon, is a proposal for life after capitalism. The first labor strike was reputedly undertaken by Egyptian slaves angered at a Pharaoh who moved from requiring six days slaving a week building pyramids to requiring seven, and from providing lunch to providing nothing. Participatory economics owes a debt to every essay, speech, and book, and every activist project and movement struggle since then, or even earlier, that has shed light on the meaning and practice of classlessness. Kropotkin, Rocker, Bakunin, Luxemburg, Pannekoek, Goldman, Ehrenreich, and Chomsky are among its major inspirations. In accord, in a participatory economy no owners and no other class dominates other participants.

    Participatory economics in its current form was fertilized in the late sixties and born in the early to late seventies. It gained clarity when Robin Hahnel and I set out our and other new leftists' reactions to various schools of anti-capitalist activism in various books and endeavors through the seventies and eighties. It became a well defined proposal by way of a book titled “Looking Forward” about 25 years ago. Hahnel and I, echoing many others, addressed economic vision with a commitment to classlessness plus four values that seemed to us likely to be very helpful in organizing and disciplining our thought: self management relating to decision making, equity relating to distribution of benefits and responsibilities, solidarity relating to people's connections, and diversity relating to a range of options. Hahnel and I used the four values, later adding ecological balance, plus a desire for classlessness and economic and social success, to orient ourselves as we sought to describe a worthy and workable economy beyond capitalism.

    People sometimes told us that we sounded like we were discussing an idea that existed only in people's minds. Other times, people said we sounded like we were talking about a system that already existed out in the world. This wasn't confusion, but accuracy.

    Parecon names a specific economic model, which is in turn a free creation of the mind that, however, aims to describe essential features of a future classless economy. It tries to specify how to have classlessness, self management, equity, solidarity, diversity, and ecological, economic, and social success.

    Parecon is, however, also an economy that will someday exist in which real workers and consumers will produce and consume real goods and services. That future parecon has properties, like a place we haven’t yet visited: we think about it, guess its properties, and finally establish it and thereby verify or alter them.

    Those who seek a participatory economy don't offer the intellectual model for entertainment, nor to exercise their minds. They are not seeking a resume of publications to get a job. They offer the model to aid seeking a new economy. They offer it to help overcome cynicism and to inform current efforts.

    Parecon's Defining Features and Breadth of Variation

    The central features which parecon's advocates feel are the minimum required to have a classless, self managing, and otherwise successful participatory economy are:

    workers’ and consumers’ self managed councils
    remuneration for duration, intensity, and onerousness of socially valued labor
    balanced job complexes
    and participatory planning.
    These four institutions define participatory economics in the same way that private ownership, remuneration for property, power, and output, corporate divisions of labor, and market allocation define capitalism.

    We know capitalism can dramatically differ from one instance to the next and that the diversity of capitalisms is not due solely to countries having different populations, resources, levels of technology, history, or differences in other parts of social life. Additionally, countless variations in the implementation of capitalism's key economic features and in the implementation of endless second, third, and fourth order economic features distinguish one instance from others. And the same will hold for actual participatory economies.

    Thus, different instances of participatory economy could differ in how labor is measured, how jobs are balanced, how councils meet and make decisions, and the details of how participatory planning is carried out, much less, beyond that, in all manner of less central features.

    Indeed, it would be a debilitating mistake to seek an inflexible, unvarying, and comprehensive blueprint for a classless economy. People have often accused participatory economics of doing just that, even though it has never come remotely close to such a stance. Parecon is neither inflexible nor unvarying and it no more specifies the details of all future participatory economies, or even of one possible future participatory economy, than any broad description of capitalism's defining features tells us everything about the U.S., Sweden, Chile, or South Africa, much less about all of them.

    The participatory economic model describes central defining features that its advocates believe necessary to attaining classlessness and delivering self management, equity, solidarity, and diversity while successfully meeting needs and developing potentials in an ecologically and socially worthy way.

    The Logic underlying Parecon's Defining Features

    Self Managed Councils

    Why do advocates of participatory economics see self managed workers and consumers councils as essential for an economy to be classless?

    One of the pivotal tasks of defining a post capitalist economy is to establish within it appropriate decision making. For an economy to eliminate unjust distributions of power and influence, it must promote each worker and consumer participating with an appropriate level of influence in the decisions that affect their lives. If no person is to occupy a more privileged position than other people occupy, then each person must have the same broad relation to decision making as other people have.

    There are various ways to achieve that. For example, we could have every person get one vote in every decision. This would mean everyone is treated the same; however, many decisions have near zero impact on me. So why should I have the exact same say as people directly involved who are far more affected? On the other hand, regarding decisions where I am highly involved, why shouldn't I have more say than people who are only tangentially affected?

    Pursuing that simple insight, while requiring that the same norm apply to everyone, yields the idea that every actor should have a say in economic decisions in proportion as he or she is affected by them. This, which advocates of participatory economics call self management, is a value. Having arrived at it, we can consider its implications and decide if it is fair and also conducive to best decisions.

    If we agree that workers and consumers should have an influence in outcomes proportionate to how they are affected by them, where are they going to exert this influence? It may be we lack imagination, but advocates of participatory economics have found it hard to conceive of any answer other than that workers and consumers will have to do so by connecting with other workers and consumers, each participant acting sometimes singly and sometimes in concert with others, but with all participants being in position to use relevant information and exercise relevant confidence and decision making skills.

    Sometimes we will make decisions as individuals. Sometimes we will do it in small or large groups. We will have more or less say in decisions, either individually or in groups, depending on how potential outcomes affect us relative to how they affect others. Workers and consumers - as individuals, in little teams, in whole workplace or neighborhood councils, as well as in nested aggregates of councils - will express and manifest their preferences.

    That the venues for worker and consumer participation should be self-managing requires that they utilize means of sharing information, discussing options, and tallying preferences that give each worker and consumer a say proportionate to the degree they are affected. But it also requires that the workers and consumers be prepared to participate in the associated deliberations. Full discussions of the contours of self management would address how it might look in different contexts, including describing associated cases, methods, and so on, but the overall idea is simple.

    For some types of decision people determine that one person, one vote and majority rule is best. For others, they perhaps require two-thirds, three-quarters, or consensus. Sometimes preferences are expressed by one person, by a few people, by all workers in a plant, or by each or all consumers in some locale. Local decisions of course occur in the context of system-wide collective determination of economic inputs and outputs. Everyone has an appropriate say in all outcomes.
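
    As an illustrative aside (the essay specifies no formal mechanism), here is a toy tally in which each participant's say is weighted by how affected they are, under a configurable approval threshold. The weights and threshold are hypothetical inputs whose determination the essay leaves to the councils themselves:

```python
# Sketch: self-managed tally with say proportional to impact.
# Weights and thresholds are invented illustrations, not a spec.

def weighted_decision(votes, weights, threshold=0.5):
    """votes: dict person -> True/False approval.
    weights: dict person -> how affected they are.
    threshold: 0.5 = majority, 2/3, 3/4, or near 1.0 for consensus.
    Returns True if the impact-weighted approval share exceeds it."""
    total = sum(weights.values())
    yes = sum(weights[p] for p, v in votes.items() if v)
    return yes / total > threshold

# A workplace decision: the directly affected teams carry most of
# the weight; tangentially affected neighbors carry a little.
weights = {"team_a": 5.0, "team_b": 5.0, "neighbors": 1.0}
votes = {"team_a": True, "team_b": True, "neighbors": False}

print(weighted_decision(votes, weights, threshold=2 / 3))  # True: 10/11 > 2/3
```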

    The idea of workers and consumers councils has a long history in labor struggles and workplace activism and at times also in community organizing. That may be why parecon's advocates can't imagine anything but self managed workers and consumers councils as the main sites of economic decision making. Workers and consumers gravitate to this option themselves every time they undertake widespread resistance. Parecon's explicit clarification of self management as a decision-making norm is only a modest refinement of what has long been implicit in popular inclinations. On the other hand, of course, some people doubt self managing councils, and we will consider their concerns later, but, for now, let's continue with our brief survey of the logic of parecon's defining features.

    Remuneration for Duration, Intensity, and Onerousness of Socially Valued Labor

    The next defining feature bears on actors’ claims on a share of the social product. What should govern what each person in a participatory economy receives as income? What logic reveals that parecon's proposal is essential for classlessness and viability?

    We need two things from a payment scheme or norm. On the one hand, it needs to apportion society's output in an ethically sound way. Everyone should get an amount that reflects appropriate moral commitments rather than violating them. On the other hand, a payment scheme should also give people economically sensible incentives that ensure that society's productive potentials will be utilized to meet needs without incurring undue waste.

    The desire to be ethically sound is why parecon's workers receive more income for working longer, harder, or under more debilitating conditions and, likewise, why parecon does not give more income for someone having more power, owning property, being in an industry making something more valuable, or having highly productive workmates, better tools, or more productive innate talents to work with.

    With parecon's equity norm in place, we all earn at the same rate. We all earn with the same prospects. We don't exploit one another. No one can earn too much beyond others, because no one can work too much longer or too much harder than others. And when someone does earn more than someone else, it is only for reasons which everyone agrees are warranted.
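
    Purely as illustration, the norm can be written as a tiny pricing rule. A minimal sketch, assuming invented numeric rating scales and an invented base rate; the essay fixes only the principle, not the bookkeeping:

```python
# Sketch: income from duration, intensity, and onerousness of socially
# valued labor. The base rate and the 1.0-average rating scales are
# invented for this example.

BASE_RATE = 20.0  # income units per average-effort hour (invented)

def income(hours, intensity, onerousness, socially_valued=True):
    """intensity, onerousness: council-assigned ratings, 1.0 = average."""
    if not socially_valued:
        return 0.0  # output must be socially valued to count at all
    return BASE_RATE * hours * intensity * onerousness

print(income(35, 1.0, 1.0))  # 700.0: a standard week
print(income(35, 1.0, 1.2))  # 840.0: same hours, harsher conditions
```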

    Of course, a full discussion addresses finer points to reveal ultimate viability, worthiness, and texture, but the essence is that rather than remunerating property, power, or even output, each of which leads to huge disparities of income and wealth that don't have any moral basis other than one or another form of elitism and that incur huge debits in the form of poverty, ensuing defense of wealth, and so on - parecon opts to remunerate only how hard and how long we work and the discomfort we endure at work. Participatory economics claims that this respects the effort we are contributing as well as any hardship we are enduring to create socially valued output.

    The incentive part of the remuneration task, that it should get needed work accomplished without undue waste, is what makes parecon declare that work that receives income must be socially valuable. If I seek income for the hours that I spend composing music, building houses, playing shortstop for a ball team (or digging holes and filling them), I won't be convincing because I cannot do any of those things well enough to warrant my using associated resources. Such work, done by me, will not be socially valued because I am unable to do those jobs socially usefully. I don't have those capacities.

    If I say, instead, pay me for the hours I spend producing bicycles or medicine, or maybe even writing social commentary, and if it is a product that society wants and that I am capable of usefully producing, then I can receive income at the standard rate for my effort, but I can't just stand around and say, hey, I worked, pay me. I have to generate output commensurate to the time I claim to have spent. I don't get paid for the value of the output that I generate, but along with my council mates, my work does have to generate valued output if it is to count as being worthy of remuneration for its duration, intensity, and onerousness.

    The incentive effect of this participatory economic approach to remuneration is precisely what it ought to be. I have an incentive to work well, hard, and, when necessary, enduring discomfort doing socially useful things. I am not pushed or compelled, however, to work longer or harder or under worse conditions than my well being or society's benefit calls for, whether in work or in consumption. And in all this I am treated precisely like everyone else.

    Here is a revealing way to think of it that another parecon advocate, Peter Bohmer, often emphasizes. Imagine your work/income plus leisure time off work as a kind of bundle that has various overall effects on you and others. Everyone who can work has such a bundle including their leisure and their work/income. Participatory economics says the overall worth of each bundle for each worker should be the same as the overall worth of other bundles for other workers. What we equilibrate is the sum of the value of the work plus the value of the income received for it and the value of leisure.

    To clarify further, imagine everyone works equally long, equally hard, and under the same conditions - and we all make good use of our time and of the resources we use so that others benefit sufficiently to warrant our activity. Surely that is an equitable arrangement. Okay, but now assume you are called upon by circumstances (or your preferences) to work somewhat longer, or harder, or to endure somewhat worse conditions. Why would you agree to that? For it to be fair, you would do so for more income that offsets the outlay of additional time at work. Or, suppose instead you would like to work fewer hours, or less hard, or under better conditions. Why should society - or your workmates - say, okay, sure, go ahead, take more leisure? Answer: because you will take less income, and your income/work bundle will remain equitable along with everyone else's.
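
    Peter Bohmer's bundle image can be made concrete with a small worked example, assuming, purely hypothetically, that work burden and leisure can be scored on a common scale with income:

```python
# Sketch: equilibrated work/income/leisure bundles. Every number and
# scoring rule below is invented to illustrate that extra burden must
# be offset by extra income so each bundle keeps the same overall worth.

LEISURE_RATE = 2.0   # value per leisure hour (invented scale)
BURDEN_RATE = 10.0   # burden per hour worked (invented scale)

def bundle_value(income, hours_worked, leisure_hours):
    return income - BURDEN_RATE * hours_worked + LEISURE_RATE * leisure_hours

# Baseline: 35 hours worked, 75 leisure hours, income 700.
baseline = bundle_value(income=700, hours_worked=35, leisure_hours=75)

# Working 5 extra hours costs 5 * BURDEN_RATE in burden plus
# 5 * LEISURE_RATE in lost leisure, so fair compensation is their sum.
compensation = 5 * BURDEN_RATE + 5 * LEISURE_RATE
longer = bundle_value(income=700 + compensation,
                      hours_worked=40, leisure_hours=70)

print(baseline, longer)  # both 500.0: the bundles remain equitable
```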

    Without getting too detailed in this presentation, parecon advocates claim that remunerating for duration, intensity, and onerousness of socially valued labor is necessary for classlessness because it is hard to see how one can generate equity as well as proper incentives and useful outputs with some other approach. Familiar options, such as remunerating power, property, and/or output, yield huge income differentials, harsh struggle over spoils, and perverse anti social incentives. Alternatively, letting everyone do whatever work they want, in any amount they choose, and then consume however much they like risks demand for outputs swamping offerings of labor and, in any case, certainly generates neither clear incentives nor needed clarity about preferences - a point we will return to later when considering reasons some people doubt parecon's virtues or possibility.

    In sum, regarding ethics, a pareconist argues that duration, intensity, and onerousness of work each morally deserve to be remunerated. They are what the worker contributes to the social product. If there is some other approach that is also ethical, okay, but we don't see what it is. And it certainly isn't, it seems to us, remunerating power, property, or being better endowed or working on something deemed more valuable (save for requiring it be valuable at all), or having better tools, or even having innate talents.

    Regarding incentives, a pareconist argues that duration, intensity, and onerousness are the attributes that incentives can draw forth and that are needed by society, at least up to a point. Incentives to cheat, steal, oppress, and pollute, on the other hand, are not needed by society. They create conditions that only benefit those who accumulate profits, at vast cost to others.

    Regarding outcomes, to ensure that what is produced makes economic sense, we of course want work to be socially desired and efficient. To remunerate for that which isn't beneficial to others sufficiently to justify expending the resources used in its creation would violate good sense and reduce overall benefit. To reward property, power, or even output, or to ignore that the effort expended needs to be socially useful would deviate from both equity and efficiency, which is why parecon chooses its particular remunerative approach.

    Balanced Job Complexes

    The third defining feature of participatory economics is balanced job complexes. Each worker does a mix of tasks such that the total of their work responsibilities has comparable empowerment implications for them as the mix of every other worker's tasks has for all others. Parecon claims that classlessness and self management can't do without this type of balancing. What logic leads to this claim?

    First, balanced job complexes are not about fairness of circumstances. If someone enjoys better or worse conditions, the remunerative approach would generate fairness by properly compensating for the difference. Balanced job complexes are instead about class division and class rule.

    We want classlessness and by definition that means we don't want our economic institutions to systematically give some citizens more power which they are motivated to use to accumulate for themselves excessive wealth and better conditions.

    We know that if we let people own means of production and determine its use, their view of others and their overall motives will be skewed so they will dominate outcomes and accumulate extreme wealth. For that reason we reject having owners as a class above workers.

    But it also turns out that if some people do only disempowering labor while other people do only empowering labor, the former traditional workers will be dominated by the latter "coordinator class." Managers, lawyers, doctors, accountants, and others doing empowering tasks within a corporate division of labor, will, by virtue of the confidence, knowledge, access to levers of decision making, self interests, self image, image of others, and motivations that their empowering position gives them, rule over workers - who, in turn, by virtue of their disempowering position will lack assets facilitating decision making and even appear incapable of conceptual participation.

    The logic of seeking balanced job complexes stems from these observations because if we reject having some people monopolize empowering conditions and roles and thereby becoming a separate "coordinator class" above workers, then we require a division of labor that doesn't give only some people empowering and most people disempowering work. That's what seems to advocates of participatory economics an inescapable conclusion which in turn requires us to structurally eliminate a class-divided distribution of tasks.

    With balanced job complexes we still welcome expertise since expertise is essential for socially valued work, but each worker does a mix of tasks - not solely rote or solely empowering - and not solely expert or solely mundane - so that everyone is comparably and sufficiently prepared by their economic position to participate in self managing councils. Parecon has a division of labor such that all workers have a mix of tasks which, taken together, comparably empower them. This prevents having a division of labor that establishes a coordinator class dominating a working class.
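
    Nothing in the essay prescribes a mechanism for the balancing itself, but one can caricature it as a scheduling problem. A hypothetical greedy sketch, with invented task names and council-assigned empowerment scores:

```python
# Sketch: assembling balanced job complexes greedily. Each task has a
# council-assigned empowerment score; we repeatedly hand the next most
# empowering task to whichever worker currently holds the least
# empowering mix, so complexes end up comparably empowering.
import heapq

def balance_jobs(tasks, n_workers):
    """tasks: list of (task_name, empowerment_score) pairs."""
    # Min-heap of (current total empowerment, worker index).
    heap = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    complexes = {w: [] for w in range(n_workers)}
    for name, score in sorted(tasks, key=lambda t: -t[1]):
        total, w = heapq.heappop(heap)
        complexes[w].append(name)
        heapq.heappush(heap, (total + score, w))
    return complexes

tasks = [("plan budget", 9), ("chair meeting", 8), ("design layout", 7),
         ("enter data", 3), ("clean press", 2), ("stock shelves", 2)]
print(balance_jobs(tasks, n_workers=2))
# Each worker gets a mix of empowering and rote tasks with roughly
# equal total empowerment (16 vs 15 in this toy run).
```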

    The corporate division of labor is familiar from capitalism but also from what has been called twentieth century socialism. It has unbalanced job complexes in which about 20 percent of the workforce does virtually all the empowering tasks while the rest of the workforce does overwhelmingly disempowering tasks. The former group continually acquires and reacquires the needed confidence, information, skills, and even energy to make decisions. The latter group instead accrues mainly exhaustion. And this difference is built into the corporate division of labor which literally imposes the results, just as having private ownership imposes that capitalists dominate economic outcomes. That is, just as capitalists monopolizing ownership gives them vast power and antisocial but self serving aims, so too the coordinator class monopolizing empowering work gives them vast power (especially when there are no capitalists above) and antisocial but self serving aims.

    Take over a factory and proclaim a desire to make it equitable, just, and humane. It doesn't matter how sincere you are, if you retain private ownership you will fail because its presence will subvert your efforts. Similarly, even if you eliminate private ownership so that there are no more capitalists, if you retain a corporate division of labor again you will fail no matter how hard you try. The corporate division of labor will subvert your efforts. These observations are borne out by even the most rudimentary and common knowledge of people and institutions, but also by countless historical examples of both types of failure.

    It therefore turns out that having balanced job complexes is not a luxury or a peripheral feature of participatory economics but is, instead, at the core of attaining classlessness. The participatory economic perspective is that having a corporate division of labor will subvert efforts to attain classlessness, self management, and equity, but having balanced job complexes will advance those desirable aims. For completeness, parecon advocates name the class that monopolizes empowering work the coordinator class and name the economic systems that elevate that class to ruling status (due to failing to eliminate a corporate division of labor) coordinatorism. Two types of economies that have precisely this attribute, and which parecon advocates thus call coordinatorist, are what have heretofore gone under the labels market socialism and centrally planned socialism.

    Participatory Planning

    As the fourth and last defining feature of participatory economics, why must an economy have participatory planning to be classless, self managing, etc.? Wouldn't it be easier to stick with markets wherein disparate separate actors compete, or to opt for central planning undertaken from the top down? What logic requires a new type of allocation?

    Advocates of participatory economics of course freely admit that it would indeed be easier to stick with markets or central planning than to adopt a new allocation system, but they also emphasize that retaining these old allocation systems would be suicidal for attaining classlessness, much less attaining full self management, equity, etc.

    Participatory economy advocates claim that both markets and central planning have intrinsic flaws which would horribly distort economic choices of what to produce and consume, and that, even beyond that, they have intrinsic dynamics that compel workers and consumers to make choices contrary to maintaining self management, solidarity, equity, classlessness, ecological stewardship, etc. We have no choice, therefore. If we want classlessness, we must opt for a new approach to allocation.

    Though of course such discussions require much more to be complete, in brief, by its very definition, central planning gives excessive influence to planners and diminished influence to others. Planners, in turn, need loyal allies inside workplaces to enforce the planners' instructions and also to gather information that planners need. When the dust settles we are back to a distinction between empowered and disempowered producers - which is to say, coordinator class members and workers.

    Further, once they have ruling class status, planners and all their fellow empowered coordinator class members bend decisions to primarily advance their own interests, albeit in the name of system preservation. This is predictable based on understanding people, systems, etc., but it is also borne out by the history of twentieth century socialism - which removed owners but established coordinator class rule in its place.

    With markets the story is similar though the details are very different. But where central planning can arguably at least in theory arrive at reasonably accurate valuations of economic products and processes, markets cannot achieve even that because they intrinsically mis-specify prices regarding public and social goods, ecological impact, etc. Markets also enforce that actors behave egocentrically, even narcissistically, and react on a very short timeline. We have no option but to make choices with no concern for nor even any knowledge of implications for others around us, much less for others who are geographically distant or who live in the future. Indeed, with markets, solidarity is punished, greed rewarded.

    Likewise, but less well understood, markets induce class rule. In the rush to capture market share and to avoid being outcompeted it is necessary to cut costs. After a point, this can only be done at the expense of workers and consumers. To carry it out requires decision makers who are callous to broad social needs and insulated against the losses that cutting costs at the expense of workers and consumers imposes. This is the coordinator class which is employed by firms to ensure surpluses even against workers' desires for self management.

    Each criticism raised above, more fully elaborated - much less all of them together - provides reason to be a market abolitionist and also to join the generalized chorus against central planning. Exploring those points would demonstrate beyond any doubt that it would be wonderful to have an allocation system that did not generate class division, that properly valued individual, social, and ecological effects, and that produced by its dynamics solidarity rather than anti-sociality and diversity rather than homogeneity. But why should we adopt, in particular, participatory planning? Why does that fulfill our agenda? What if adopting participatory planning would not ensure the sought gains but would instead take the economy from the frying pan into the fire?

    As with earlier defining features, the underlying argument for participatory planning is not complex. We want social behavior not anti social behavior. We want informed participation with appropriate levels of say, not authoritarian hierarchies. We want true social costs and benefits that are intelligently and freely taken account of in decisions, not false costs and benefits self interestedly manipulated and exploited.

    These desires require that those affected by decisions cooperatively negotiate outcomes and even just this impetus alone is pretty much sufficient, I suspect, to narrow our allocation search to participatory planning as outlined in models of parecon, or to something very much like it, at any rate.

    In participatory planning, workers and consumers can freely express their preferences and this can't possibly be avoided if we want self management. In doing so, workers and consumers have time, information, and motivation to take into account what others express and to modulate their own choices accordingly, in a back and forth dynamic. Once someone thinking about the allocation problem has that much in mind, the rest is essentially driven by the constraints of having accurate valuations as well as appropriate say for all actors. That's how Hahnel and I drew out the contours, at any rate, adding steps and facilitating structures into a collective and cooperative negotiation process undertaken by councils, as they were needed to make the operations both worthy and viable, including being able to deal with unexpected shocks, changes in tastes, innovative discoveries, etc.
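
    The back-and-forth dynamic can be caricatured in code. A deliberately crude, single-good sketch in which a facilitation rule nudges an indicative price until proposals roughly balance; the response functions stand in for councils revising full proposals and are entirely invented:

```python
# Sketch: one caricature of the planning iteration. Councils respond
# to an indicative price; a facilitation step nudges the price until
# consumption requests and work offers approximately balance.

def demand(price):   # consumer councils' requests shrink as price rises
    return 100.0 - 2.0 * price

def supply(price):   # worker councils offer more as price rises
    return 10.0 + 1.0 * price

price, step = 10.0, 0.1
for round_ in range(1, 200):
    excess = demand(price) - supply(price)
    if abs(excess) < 0.5:    # proposals are (approximately) feasible
        break
    price += step * excess   # raise if over-demanded, cut if over-supplied

print(f"converged in {round_} rounds at price {price:.2f}")
# Expected: price near 30, where demand (40) equals supply (40).
```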

    Participatory planning is, therefore, just an institutional expression of the long-term anarchist, decentralized socialist, and even religious injunction that workers and consumers should decide production and consumption themselves in accord with their needs and desires and not compelled by the choices imposed by some narrow elite or ruling class, albeit with parecon's specific conception of self management appended.

    But of course a critic might say that it sounds glorious except the whole system would implode for various reasons. And, indeed, there are also those who would say self management is nonsense because it will yield dumb outcomes by under utilizing expertise; or that equitable remuneration is nonsense because there will be insufficient incentives for being a doctor or filling other roles that require lots of training and are highly productive; or that balanced job complexes are nonsense because they are too clumsy and in any event most people can't do what they require; or, finally, that participatory planning is nonsense because it would fall flat, not in its values, but in its implementation. This is all fair enough, because if any of that is true, parecon would be flawed and would need renovation or, if renovation proved impossible, to be jettisoned entirely. None of that, even if true, however, would warrant turning back toward markets or corporate divisions of labor or private ownership, as compared to still seeking a workable vision beyond all these.

    Parecon's Current Status

    We will take up the above concerns, and others as well, in part two of this survey, but even before that, we might now ask, why should anyone take seriously even just the possibility that the four defining features of parecon might be desirable in any event? If they were desirable if implementable, for example, shouldn't many more people be discussing, debating, and advocating participatory economics, or trying to determine its viability? If parecon would be worthy if it proved possible, why aren't there more reviews, essays, and support as well as criticism?

    Participatory economics, like any conceptual model when first presented, was initially utterly invisible. Twenty five years later, however, it is still shrouded, let's say, at least on a grand scale. It struggles up from under the curtains of silence every so often, but then it falls back beneath. Even if we consider only anti capitalists - though things have begun changing as steadily more anti capitalists have come into contact with parecon and begun to assess it for themselves, and as Latin American and European events, and even polls, votes, and evident frustrations in the U.S. and UK have begun to put the issue of what we want at the forefront - still progress is very slow, and visible discussion is almost non existent. But why has this process taken so long, and why, even now, is there noticeably little print discussion of this vision even while growing numbers of activists at the grassroots are starting to take parecon seriously?

    One possible answer, benign and without broader implications, is just that new ideas and formulations often require a lot of time to percolate into view, and even more time to get serious public assessment. I think this is certainly part of the story. But I also think it isn't the whole of the story.

    Why, for example, haven't there been more major reviews and essays about parecon, either highly critical, or gently or aggressively supportive? I think there are two parts to the answer beyond just noting that such things take time.

    The first part is that there is relatively little written, whether as a review or otherwise, about any economic vision at all. It isn't just participatory economics that goes under-discussed (even in alternative media), but other economic visions as well (and, really, any kind of vision at all).

    Make some new claim about how capitalism works, or racism, or whatever, in the world we daily endure, and it will be dissected ad nauseam, especially if people have a way to disagree with it. Make some claim about what should replace capitalism, racism, or whatever, however, and there will very likely be a crescendo of silence. This is true regardless of what visionary claims are made.

    But while non-specific vision aversion explains a long, slow haul for any visionary claims, I think a second part of the answer in the case of participatory economics is that parecon has attributes that orient people who run progressive publications, radio shows, and organizations away from giving it even modest visibility. That is, if parecon becomes widely advocated on the left, there will arise pressure for changes in left institutions to move them in pareconish directions, and many people sincerely feel that such changes would be destructive, or sometimes even just oppose them to protect their own continued roles.

    There is a loose but instructive analogy to the rise of feminism or black power decades back. As those broad perspectives gained strength there arose great pressures to reduce racism and sexism in left movements and projects and to actively propel cultural diversity and feminism. There also arose considerable resistance to these changes, not least from people who saw them as threatening their own situations. I think the same holds for participatory economics.

    Those who own or who administer left projects, publications, and movements, either implicitly or explicitly realize at some level that if pareconish economic views became preponderant their current agendas for left efforts would be disrupted by a drive toward equity, self management, and particularly balanced job complexes within their own projects and organizations. Whether they resist this type of change to avoid loss of position or because they sincerely think it would be harmful varies case by case.

    There was a time when a periodical that didn't have reviews of participatory economics, or any kind of visibility for parecon at all, could legitimately claim it was because parecon was a sidebar set of notions, without much support, and because the periodical hadn't, in fact, received any writing about parecon. Their not soliciting writing would hardly demonstrate active resistance but, instead, just a common disposition away from vision in any form, or even just honest ignorance of parecon's existence, or sincere doubt about its worth. But nowadays at least a good number of left periodicals have received many submissions and actively rejected or more often ignored all of them including from well known writers and even people on their own staffs. I think that suggests a different dynamic than benign neglect.

    In any event, whatever the causes may be, the relative absence of people seriously debating parecon's merits in diverse print venues greatly hinders its spread. A potential reader could reasonably think to him or herself, should I wade through this book, or even just this article? Should I immerse myself in this website? Should I work to understand these ideas? Should I develop my own views about them? Well, wait, perhaps I shouldn't do any of that. After all, my favorite journals haven't said a word about this vision. So I should probably ignore it too and wait and see if parecon gains credibility before I invest any of my own very limited time assessing it.

    This kind of reader reticence to take parecon - or anything else largely ignored by left media - seriously by giving it some time and attention, given the absence of serious print debate about it, is quite reasonable for each individual. I make calculations of the same sort, often, about stories and investigations that are not up my alley. If they can't get attention from others who are closer to the topic, I can't see giving my time to them. But writ large - with vision per se, and with parecon - this is not so reasonable if, as I believe, the silence among media outlets isn't itself reasonable.

    In any event, this kind of dynamic has operated for at least a decade. The rise in the numbers of people relating to parecon despite the absence of print discussion and debate is arguably remarkably quick, rather than slow, once seen in that context. But whether it is slow or fast, we can now at least hope, with some reason, that the attention parecon is getting is reaching a scale that will propel collective adjudication of the merits of the model. And perhaps that will be helped along by this three part offering.

  • Boulder Dash 17th Mar 2019

    “We want true social costs and benefits that are intelligently and freely taken account of in decisions, not false costs and benefits self interestedly manipulated and exploited.”

    Ok, rather than fucking robots that can play the stock market or manipulate shit somehow for their own benefit, whatever that may be, why couldn’t people innovate an AI system that was able to intelligently ascertain the true social costs and benefits of production? Huh? Then develop some nifty algorithms that could deal with the iteration process? Huh? Why not?

    Well first, because no one’s even bothering to imagine that a better, fairer, just and equitable economic system is possible.

    Too hard. Markets for allocation are just easier dude. Yeah there’s some bad things about ‘em but fuck, participatory planning man, that just sounds too hard man...come on, be realistic. Just too hard...not as hard as a superintelligent artificial general intelligence machine by 2029 dude...woohoo! That’s sexy as shit man.
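
    Taking the comment's question at face value, here is one hypothetical fragment of what "intelligently ascertaining true social costs" could mean computationally: fold estimated external effects into an indicative valuation rather than ignoring them, as market prices do. Every category and number below is invented:

```python
# Sketch: an indicative "true social cost" that adds estimated
# external effects to direct production costs, unlike a market
# price that ignores them. All estimates are invented.

def true_social_cost(direct_cost, externalities):
    """externalities: dict of effect -> estimated cost per unit
    (negative values would be external benefits)."""
    return direct_cost + sum(externalities.values())

coal_kwh = true_social_cost(
    direct_cost=0.04,
    externalities={"co2_damage": 0.06, "health_impacts": 0.03},
)
wind_kwh = true_social_cost(direct_cost=0.05, externalities={})

print(f"coal: {coal_kwh:.2f}/kWh vs wind: {wind_kwh:.2f}/kWh")
# With externalities counted, planning would favor wind here even
# though its direct cost is higher.
```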

    • Irie Zen 19th Mar 2019

      [..]

  • Boulder Dash 18th Mar 2019

    Science/Technology

    Defining science

    Like every label for a complex personal and social practice, the word science is fuzzy at its edges, making it hard for us to pin down what is and what isn’t science. Nonetheless, for our broad purposes, we can assert that science refers to an accumulated body of information about the components of the cosmos, to testable claims or theories about how these components interact, as well as to the processes by which we add to our information, claims, and theories, reject them as false, or determine that they are possibly or even likely true.

    My personal knowledge that the grass I see from my window is green is not science, nor is my knowledge that my back was hurting an hour ago, or that my pet parrot Zeke is on my shoulder. Experiences per se are not science, nor are perceptions, though both can be valid and important.

    It isn’t by way of science that we know what love is or that we are experiencing pain or pleasure. It isn’t science that tells a Little Leaguer how to get under a fly ball to catch it. Science doesn’t teach us how to talk or what to say in most situations, nor how to add or multiply numbers.

    Most of life, in fact, including even most information discovery and communication, occurs without doing science, being ratified by science, or denying, defying, crucifying, or deifying science.

    And yet, most knowing and thinking, and especially most predicting or explaining, is much like science, even if it is not science per se.

    What distinguishes what we do every day from what we call science is more a difference of degree than a difference of kind. Perceiving is perceiving. Claiming is claiming. Respecting evidence is respecting evidence. What distinguishes scientists doing these things in labs and libraries from Mr. Jones doing these things to choose the day’s outfit and stroll into town is science’s personal and collective discipline.

    Science doesn’t add new claims about the properties of realities’ components to its piles of information and its theories, nor does it assert the truth or falsity of any part of that pile, without diverse groups of people reproducing support- ing evidence and verifying logical claims under very exacting conditions of careful collection, categorization, and calculation. Nor does science advance without reasons to believe that what is added to the scientific pile has signifi- cant implications vis à vis the pile’s overall character, history, and development.

    As Einstein taught, and as is generally agreed, what makes a theory more impressive is greater simplicity of premises, more different kinds of things explained, and its range of applicability. What is most happily added to science’s knowledge pile are checkable evidence, or testable claims, or new paths connecting disparate parts that verify or refute previously in doubt parts of the pile, or that add new non-redundant terrain to the pile, in turn giving hope of providing new vistas for further exploration.

    If we look in the sky and say, hey, the moon circles the earth, it is an observation, yes, but it is not yet science. If we detail the motions of the moon and provide strong evidence for our claims about its circling the earth that is reproducible and testable by others, we are getting close to serious science, or even contributing to it. If we pose a theory about what is happening with the moon, and we then test our theory’s predictions to see if they are ever falsified or especially if they predict new outcomes that are surprising to us, then we are certainly doing science.

    Webster’s Dictionary defines science as “the observation, identification, description, experimental investigation, and theoretical explanation of natural phenomena”.

    The Concise Oxford Dictionary defines science as “the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment”.

    Seventy-two Nobel Laureates agreed on the following definition: “Science is devoted to formulating and testing naturalistic explanations for natural phenomena. It is a process for systematically collecting and recording data about the physical world, then categorizing and studying the collected data in an effort to infer the principles of nature that best explain the observed phenomena.”

    And Richard Feynman, one of the foremost physicists of the twentieth century, pithily sums up the whole picture: “During the Middle Ages there were all kinds of crazy ideas, such as that a piece of rhinoceros horn would increase potency. Then a method was discovered for separating the ideas – which was to try one to see if it worked, and if it didn’t work, to eliminate it. This method became organized, of course, into science.”

    Science motives

    We can say with confidence that the type of economy a society has can affect science by affecting the information that is collected and the claims about it that are explored, the means and procedures utilized in the collection and exploration, and who is in position to participate in these processes or, for that matter, even to know about and be enlightened by science’s accomplishments.

    There are at least two individual and two social motives that propel science.

    First there is pure curiosity, the human predilection to ask questions and seek their answers.

    Why is the sky blue? What happens if you run at the speed of light next to a burst of light? What is time and why does it seem to go only one way? What is the smallest piece of matter and tiniest conveyor of force? How do pieces of matter and conveyors of force operate? What is the universe, its shape, its development? What is life, a species, an organism? How do species form, persist, get replaced? Why is there sex? Where did people come from? How do people get born, learn to dance, romance, try to be a success? What is a language and how do people know languages and use them? What is consciousness? When people socialize, what is an economy, how does it work, and what is a polity, culture, family, and how do they work?

    Inquiring minds passionately want to know these things even if there is nothing material to be gained from that knowledge, rather like someone passionately wanting to dance even if no one is watching, or someone passionately wanting to draw even if no one will put the results on a wall.

    A second personal motive for science is individual or collective self-interest. Knowledge of the components of reality and their interconnections sufficient to predict outcomes and even to affect outcomes can not only assuage our curiosity, it can increase the longevity, scope, range, and quality of our lives.

    What is the cause and cure for polio or cancer? How do birds fly? How does gravity work? Curiosity causes us to open the door to the unknown with gigantic desire and energy; but we drive whole huge caravans through the doors of science, in part because of the benefits we gain.

    The benefits can come from the implications of the knowledge itself, but also from remuneration for scientific labors or achievements. There can be material rewards for gathering information and for proposing or testing hypotheses about reality. Pursuit of these rewards is also a motive for doing science.

    Likewise, the benefits to be had beyond the satisfaction of fulfilling one’s curiosity are not confined to material payment. One can attain status or fame, and doing science is often at least in part driven by pursuit of the social prizes, notoriety, stature, and admiration that accompany discovery.

    Science and economics

    An economy can plausibly increase or diminish people’s curiosity, or just push it in one direction or another. It can affect as well the ways that scientific knowledge can directly benefit people, and, of course, it can affect the remuneration and other material rewards bestowed on people for doing science as well as the social rewards they garner.

    We can see all of this in history too. For a long time science as we define it did not even exist. There was mysticism and belief, sometimes approximating truth and sometimes not, but there wasn’t an accumulation of evidence tested against experience and guided by logical consistency.

    Later, societies and economies propelled science and oriented it in various ways. At present, tremendous pressures from society, and particularly from capitalist economy, both propel and also limit the types of questions science pursues, the tools science utilizes, the people who participate in science, and the people who benefit from or even know of science’s results.

    In the U.S. science has become ubiquitous, revealing the inner secrets of materials, space, time, bodies, and even, to a very limited extent as yet, minds.

    But science has also become, in various degrees and respects, an agent of capital. Distortion arises when the different methods and problems scientists utilize are biased by motives other than scientific inquiry undertaken for its own sake.

    British journalist George Monbiot reports that “34% of the lead authors of articles in scientific journals are compromised by their sources of funding, only 16% of scientific journals have a policy on conflicts of interest, and only 0.5% of the papers published have authors who disclose such conflicts”.

    In the pharmaceutical industry, circumstances are arguably at their worst, in that we find that “87% of the scientists writing clinical guidelines have financial ties to drug companies”.

    More subtly, commercial funding and ownership affect what questions are raised and what projects are pursued. If patent prospects are good, money flows. If they are bad, even when reasons of general curiosity or improving human welfare warrant a line of inquiry, funding is hard to come by.

    At the most horrific extreme, citizens may wind up “guinea pigs as in the Tuskegee Syphilis Experiment between 1932 and 1972, or in experiments between 1950 and 1969 in which the government tested drugs, chemical, biological, and radioactive materials on unsuspecting U.S. citizens; or [as in] the deliberate contamination of 8000 square miles around Hanford, Washington, to assess the effects of dispersed plutonium”. On a larger scale, in the U.S. the Pentagon now controls about half the annual $75 billion federal research and development budget, with obvious repercussions for the militarization of priorities.

    I recently sat on an airplane next to an MIT biologist interested in human biological functions and dysfunctions. He was not at all political or ideological, but he had no confusion about the way things work. “What we do, what we can do, even what we can think of doing,” he told me, “is overwhelmingly biased by the need for funding which, nowadays, means the need for corporate funding or, if government, then a government that is beholden overwhelmingly, again, to corporations or to militarism. More, the corporations plan on a very short time horizon. If you can’t make a very strong case for short run profits, forget about it. Find something else to pursue, unless, of course, you can convince the government your efforts will increase killing capacities.” My travel neighbor’s attitude shows the deadly combination of market competition and profit seeking plus militarist governments at work (and anecdotally reveals as well, that everyone knows what’s going on).

    Parecon and science

    What would be different about science in a parecon? Four primary structural things would change, which in turn have a multitude of implications.

    Each parecon scientist will work at a balanced job complex, rather than occupying a higher or lower position in a pecking order of power.

    Each parecon scientist will be remunerated for the duration, intensity, and, to the extent relevant, harshness of their work, not for power or output, much less for property.

    Each parecon scientist, with other workers in his or her scientific institution – whether it’s a lab, university, research center, or other venue – will influence decisions in proportion as he or she is affected by them.

    The level of resources that parecon’s scientists are allotted to engage in their pursuits will be determined by the overall economic system via participatory planning, again with self management.

    As a result pareconish science will no longer be a handmaiden to power and wealth on the one hand – indeed these won’t even exist in centralized forms – nor will those involved in scientific pursuits earn more or less remuneration or enjoy more or less power than those involved in other pursuits.

    A scientist who makes great discoveries within a parecon will no doubt enjoy social adulation and personal fulfillment for the achievement, but will not thereby enjoy a higher level of consumption or greater voting rights than others. Likewise, a scientific field will not be funded on grounds of benefiting elites as compared to advancing human insights for all.

    Will there be huge expenditures on tools for advancing our knowledge of the fifteenth decimal point of nuclear interactions or the fourteen billionth light year distant galaxy even before we have figured out how to reduce the hardships of mining coal or containing or reversing its impact on the ecology, or before we develop alternative energy sources?

    Will research be undertaken on grounds of military applications instead of on grounds of implications for knowing our place in a complex universe?

    These are questions that will arise and be answered only when we have a new society. What parecon tells us is the broad procedure, not the specific outcomes that people will choose, though we can certainly make intelligent guesses about the latter, too.

    When the latest and greatest particle accelerator project was being debated in the U.S., a congressman asked a noted scientist who was arguing for allocating funds to the super collider what its military benefits would be. The scientist replied it would have no implications for weaponry, but it would help make our society one worth defending. The scientist’s motivations and perceptions failed to impress the Congress, which voted against the project.

    Do we know that in a parecon the participatory planning system would have allotted the billions required? No. We don’t know one way or the other. But we do know that the final decision would be based not on the project’s military benefits, but rather on how the project would contribute to making society a more desirable and wiser place.

    So parecon in no way inhibits scientific impulses. Instead it is likely to enhance them greatly, through an educational system that will seek full participation and creativity from everyone, and because parecon will allot to science what a free and highly informed populace agrees to. Science, in the sense of creatively expanding the range and depth of our comprehension of the world, depends on real freedom, which is to say real control over our lives to pursue what we desire – which is what parecon provides.

    Technology

    Technology is similar to science in its means of pursuit and logic of development. Those who work to produce technology or applied science in a parecon will have the same influence, conditions, and income as those who do other endeavors. The critical difference will be how society decides which technologies are worth pursuing.

    Capitalism pursues technologies when they can yield a profit or help elites maintain or enlarge their relative advantages. As a result, capitalist technological innovations reflect the priorities of narrow sectors of the population, not generalized human well-being and development.

    In the U.S., for example, technological nightmares abound. Indeed, the whole idea of high tech and low tech is revealing. Something is high tech if it involves huge apparatuses and massive outlays of time and energy, thus generating many opportunities to profit. Something is low tech if it is simple, clean, and comprehensible, and generates fewer possibilities for profit. Why can’t we change the standards so something is considered high tech if it greatly enhances human well being and development, and something is considered low tech if it tends toward the opposite effect?

    Smart bombs, in their deadly majesty, are now considered the highest of high tech. The sewage system, mundane and familiar, is considered low tech, at best. Yet the former only kills and the latter only saves.

    The pursuit of new drugs with dubious or even no serious health benefits is considered high tech. Working to get hospitals cleaner and bug free is considered low tech – relying largely on medical hygiene norms. The former helps the rich and powerful accrue more wealth. The latter would help all of society accrue longevity and a better quality of life, but might actually diminish profits. Capitalism celebrates the former and prevents the latter.

    In the U.S., the pursuit of industrial technology is overwhelmingly about profits.

    This has diverse implications. U.S. technology seeks innovation to lower market-determined costs, which in any event ignore the adverse effects of production on environment and workers. Thus technologies that use fewer inputs at lower costs are sought, but technologies that spew less pollution or impose less stress on workers are not sought unless owners are forced by social movements to pursue them.

    U.S. technology seeks to increase market share by convincing audiences to buy products regardless of the value of the innovation or its social cost in byproducts. Gargantuan resources and human capacities go into designing packaging and producing advertising, often for entirely interchangeable and utterly redundant or even harmful products. Everyone knows this. Within our system, it is just another nauseating fact of life.

    U.S. technology likewise seeks to increase coordinator class and capitalist domination of workplace norms by imposing divisive control and fragmentation, regardless of the harsh implications for subordinated workers. The point is that under capitalism there won’t be funds to research new workplace organization and design aimed at workplace well-being and dignity. There will be no effort to enhance the knowledge and power of workers, but exactly the opposite.

    U.S. technology also seeks to ward off avenues of innovation that would diminish profit making possibilities for the already rich, even at the expense of lost public and social well-being for the rest of society. Don’t even think about replacing oil as our main source of fuel as long as there are profits to be extracted from its use. The economy will rebel against serious pursuit of wind, water, geothermal, and other approaches that would decentralize control and diminish specialization that benefits elite sectors, and that would challenge the current agendas of major centers of power.

    U.S. technology also seeks to implement the will of geopolitical war-makers by providing smarter bombs, bigger bombs, deadlier bombs, and vehicles to deliver them. So if you are a young potential innovator, there will be enormous pressure on you to study certain disciplines, develop certain skills, and nurture certain aspects of your personality, if you want to make it. And then once you have accumulated these talents, there will be enormous pressure to utilize them. It is even evident throughout popular culture just how much this is all taken for granted. The only thing people doubt is that there is any alternative.

    Economics and technology

    As historian and philosopher David Noble urged in an interview with The Chronicle of Higher Education, “No one is proposing to ignore technology altogether. It’s an absurd proposition. Human beings are born naked; we cannot survive without our inventions. But beneficial use demands widespread and sustained deliberation. The first step toward the wise use of our inventions would be to create a social space where these can be soberly examined”.

    Additionally, this space not only has to prepare people to soberly examine options and welcome their doing so; it also has to remove the incentives and pressures that keep people from adopting norms that support human well-being and development. Does parecon do all that and therefore contribute to desirable technological development?

    Imagine a coal mine, a hospital, and a book publishing house in a society with a participatory economy. Inside each there are people concerned with evaluating work conditions and proposing possible investments to alter production relations and possibilities. These are not being done in pursuit of greater profit – a goal that doesn’t exist in a parecon – but in pursuit of more efficient utilization of human and material inputs to provide greater fulfillment and development among those who both consume and produce workplace outputs.

    The coal mine has a proposal for a new technique, made possible via new scientific or technical insights, that would ease the difficulty of coal mining and increase its safety, or, if you want, that would reduce the pollution effects of coal mining.

    The hospital has a proposal for developing a new machine that would make healing more effective in certain cases, or one that would make certain hospital tasks easier.

    The book publishing house has a proposal for a technological change or new equipment that would make the work of preparing books a bit easier.
    And let’s add two more proposed innovations: a social investment that would allocate resources to some military experiments and the implementation of a new weapons system on the one hand, and, on the other, the allocation of resources to an innovative set of machines and work arrangements that would produce quality housing at low cost with reduced environmental degradation.

    What are the differences between how a capitalist economy and capitalist workplaces and consumers address these possibilities, and how a participatory economy with pareconish workplaces and consumers addresses these possibilities?

    In capitalism, as we have seen, various affected parties will weigh in on the choice, to the degree they even know the decision is being made. Capitalists and coordinators will be privy, and will have access to the levers of power. They will consider immediate implications for themselves – largely in terms of profit possibilities but partly, particularly for the coordinators, in terms of implications for their conditions and status. They may also consider longer run implications of their decision for the overall balance of class and social forces.
    Innovations bettering the situation of workers or even consumers will be rejected unless, and to the extent, they are also profitable for owners and to the degree the more general benefits don’t raise profitability problems. Technical innovations will be appreciated for lowering costs incurred by the owners – perhaps by dumping costs on others – and for increasing control and subordination on behalf of the lasting preservation of favorable balances of power.
    In the capitalist workplace, in fact, innovations that cost more and generate less gain in output, but that provide greater control from above, will often be preferred over innovations that yield more output per asset, but empower workers. The reason is that in the latter case the gains may ultimately be distributed, due to workers’ increased bargaining power, such that the overall result for owners is a loss rather than a gain, even though the result for productivity as a whole is positive.
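
    To make that incentive concrete, here is a toy calculation; every number in it is invented purely for illustration and appears nowhere in Albert’s text:

    ```python
    # Toy numbers, purely illustrative: why owners can prefer a less
    # productive innovation that preserves control over the workplace.

    output_now = 100.0          # current output per period
    owners_share_now = 0.30     # fraction of output captured as profit

    # Innovation A: raises output 10%, but empowers workers, whose
    # increased bargaining power drives the owners' share down.
    output_a = output_now * 1.10
    owners_share_a = 0.25

    # Innovation B: raises output only 4%, but fragments and monitors
    # the workforce, so the owners' share holds or even rises.
    output_b = output_now * 1.04
    owners_share_b = 0.32

    profit_now = output_now * owners_share_now   # 30.0
    profit_a = output_a * owners_share_a         # 27.5 <- worse for owners
    profit_b = output_b * owners_share_b         # 33.3 <- chosen, although
                                                 # total output trails A's

    print(profit_now, profit_a, profit_b)
    ```

    Society produces more under innovation A (110 units versus 104), yet B wins the investment; that is exactly the divergence between productivity and profitability the passage describes.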

    Or take another case. Why is there such a disproportionately large allocation of social resources to military expenditure and research in the U.S., as compared to what is spent on health care, low income housing, roads, parks, and education? Diverse explanations are offered for this bias. Some say it is because military expenditures provide more jobs than social expenditures, and therefore are better for the economy. But this is clearly wrong; in fact, the reverse is overwhelmingly the case. The technology-laden production of bombs and planes and associated research requires only a fraction of the labor per dollar invested that producing schools and hospitals requires.

    Others say it is because of the massive profits that accrue to aerospace and other militarily involved industries, which obviously lobby hard for government support. But this too is false. The same industries, or others equally large, would make the same kind of profits if expenditures went to housing, road repair, and other infrastructural work undertaken to fulfill government contracts. It is highly interesting that in the aftermath of obliterating the social structure of Iraq, there is a tremendous flurry of interest among multinationals to rebuild that country, yet there is no similar flurry to rebuild the inner cities of the U.S. itself. What makes blowing up societies, or even just stockpiling the means to do so, or reconstructing societies other than our own – at least up to a point – more attractive as a path of major social commitment than reconstructing and/or otherwise greatly improving the social conditions of poor and working class communities throughout the U.S.?

    The answer is not short-run profits. These can be had in all the competing pursuits. The same companies or equally large ones could make huge profits building schools, roads, and hospitals in cities throughout the U.S., just as in Iraq.
    What causes the military investment to be preferable to the social investment isn’t that it is more profitable, or that it employs more people – both of which are false – but that the product of military investment is less problematic. While social investment betters the conditions, training, confidence, health, and comfort of most working people, it also contributes to their ability to withstand unemployment and to form and advocate their own interests, and it thereby increases their bargaining power. In turn, having increased bargaining power means workers will be able to extract higher wages and better conditions at the expense of capitalist profits – and that’s the rub.

    It isn’t that owners are sadists, who would rather build missiles that sit in the ground forever than build a school that educates the poor because they revel in people being denied knowledge. It is that owners want to maintain their conditions of privilege and power, and they know that distributing too much knowledge or security and well-being to workers is contrary to doing so.

    Parecon’s technology

    How is parecon different? In a parecon, proposed technological investigation, testing, and implementation are pursued when the planning process incorporates a budget for them. This involves no elite interests, only social interests. If military expense will benefit all of society more than schools, hospitals, and parks, so be it. But if the social expenditures would benefit society more, as we can reasonably predict, then priorities will shift dramatically.

    But that is the relatively obvious part. What is really instructive is to look at the other choices mentioned earlier. In a parecon how do we assign values to the costs and benefits of an innovation in a workplace?

    A new technology can have diverse benefits and costs. If it doesn’t require any inputs or expenditure but it does have benefits, of course it will immediately be adopted. But suppose there are high costs for materials, resources, and human labor. We can’t afford to do everything, so choices must be made. If we produce another toothbrush, something else that would use the same energies and labors goes unproduced. On a larger scale, if we make one resource and labor-claiming innovation, some others will have to be put off. How is the choice made?

    The claim is that in a parecon the criteria for evaluating expenditures are that they will increase human fulfillment and development and that people must have a say proportionate to the degree they are affected. Without re-describing participatory planning in full, it may help to point out one very revealing aspect.

    If I am in a capitalist coal mine contemplating an innovation that would make coal mining less dangerous, and you are in a capitalist book publishing house contemplating an innovation that would make work there more pleasant, we each want the innovation in our own workplace for our own well-being. Neither one of us has any reason at all to be concerned about conditions beyond our workplace, nor do we have any means to know what is going on regarding worker fulfillment in other firms. We battle for our investment – actually, we try to accrue profits to pay for it. We don’t give a damn about other firms and, indeed, if we are to gain maximally, we should waste no time fruitlessly worrying about them.

    Now suppose the workplaces are in a parecon. Things change very dramatically. The coal miners have a balanced job complex, as do the publishing house workers. It isn’t just that each person in the coal mine has a job comparable to all others in the coal mine, or that each person in the publishing house has one comparable to everyone else in the publishing house, it is that all of us, taking into account our work inside as well as our work outside our primary workplace, have a socially average job complex. I do some coal mining and some quite pleasant and empowering work in my neighborhood, and you do some publishing house work and some largely rote and tedious work in your neighborhood, and we have, overall, comparably empowering and fulfilling labor.

    How do we benefit from innovations in our workplaces? We all wind up with a balanced job complex. Benefits don’t accrue only in single workplaces, but average out over society. If the innovation in the coal mine makes the work there less onerous, the time I spend outside will change in accord. Likewise if the innovation in your publishing house makes work there even more pleasant than it already was. We all have an interest in technological investments that maximally improve society’s overall average job complex because that’s what determines the quality of the job we each wind up with. This means we have to be concerned with what occurs outside our workplace if we are to advocate what is, in fact, most in our own interest.
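
    A minimal sketch may help fix the idea. The workplaces and desirability scores below are invented, and a real parecon would not collapse a job complex into a single number, but the averaging logic is the point:

    ```python
    # Illustrative sketch (not from Albert's text): how a local innovation
    # propagates to everyone once job complexes are balanced against the
    # social average. Scores are made-up desirability/empowerment ratings.

    workplaces = {
        "coal mine": 2.0,          # onerous, dangerous work
        "hospital": 6.0,
        "publishing house": 7.5,   # already fairly pleasant
    }

    def social_average(scores):
        return sum(scores.values()) / len(scores)

    before = social_average(workplaces)   # 5.17

    # An innovation eases mining, so the mine's score rises.
    workplaces["coal mine"] = 3.5

    after = social_average(workplaces)    # 5.67

    # Everyone's balanced job complex tracks the social average, so the
    # miners' gain lifts the complex each of us ends up with.
    print(f"before: {before:.2f}  after: {after:.2f}")
    ```

    Because my overall complex is pegged to that average, an improvement in your workplace literally improves my working life, which is why self-interest and social interest line up.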

    In a parecon, what is best for society and what is best for oneself are essentially the same thing, and the norms guiding choices among technological possibilities are, therefore, in accord with all people’s self-managed desires rather than reflecting the preferences of a few who enjoy elite conditions and circumstances. People might have different opinions and estimates of implications, but the underlying values are consistent. Parecon establishes the kind of context that both benefits and is benefited by technology in precisely the humanistic sense one would rationally prefer. (Realizing Hope: Life Beyond Capitalism - Michael Albert)

  • Boulder Dash 18th Mar 2019

    If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men. The first is, that they could never use speech or other signs as we do when placing our thoughts on record for the benefit of others. For we can easily understand a machine’s being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if it is touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do. And the second difference is, that although machines can perform certain things as well as or perhaps better than any of us can do, they infallibly fall short in others, by which means we may discover that they did not act from knowledge, but only for the disposition of their organs. For while reason is a universal instrument which can serve for all contingencies, these organs have need of some special adaptation for every particular action. From this it follows that it is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act. (Descartes 1637, p. 116)

  • Boulder Dash 18th Mar 2019

    https://youtu.be/0ORHGa-vQp0

  • Irie Zen 19th Mar 2019

    111 | ONE ONE ONE

    • Irie Zen 19th Mar 2019

       ***Insert***


      [..]

  • Irie Zen 19th Mar 2019

    ### Short Circuit “M0R3 1NPU7.” (#1000101010101[..]0010110012)

  • Irie Zen 19th Mar 2019

    Bender_Bending_Rodriguez_Bending_Unit_22_Serial_No_2716057: "In the name of all that is good and logical, we give thanks for the chemical energy we are about to absorb. To quote the prophet Jerematic, one zero zero zero one zero one zero one zero one zero one..."


    [Time lapse]


    Bender_Bending_Rodriguez_Bending_Unit_22_Serial_No_2716057: "Zero zero one... zero one one zero zero one... two. Amen."


     ***Insert***


    [..]


    FFF


    1000 1000

  • Boulder Dash 19th Mar 2019

    Jeez Irie, all those extra comments in my inbox just for neatening things up a bit!

  • Bat Chainpuller 20th Mar 2019

    • Boulder Dash 20th Mar 2019

      Out To Lunch - Splash'n'Klang, pigeon rescue
      I Digress Indeed - Guitar

      Splash'n'Klang is a musical practice developed by Out To Lunch in response to various problems facing modern music. Over the course of the twentieth century, recording wrecked the old composer-score-musician arrangement, enabling advanced music to dissolve the distinction between documentary sound and composed score (see Derek Bailey, Iancu Dumitrescu and Frank Zappa's "Wolf Harbor"). However, although they play by ear and could invent every note, thus making each performance unique and extending musical variety and delight into infinity, most non-reading bands survive by reproducing known quantities. In order to encourage the musicians of AMM All-Stars to suspend musical time and pay attention to the sounds immediately in front of them, Out To Lunch records Splash'n'Klang in his bathtub: running taps, rattling the plug, pouring water from jugs, making bubble sounds, and striking floating bowls and glasses with plastic chopsticks. These rituals are then played during the weekly improvisation session by AMM All-Stars which constitutes Late Lunch With Out To Lunch, a radio show on Resonance FM (2-3pm Wednesdays). Splash'n'Klang was partly arrived at through email discussions with guitarist I Digress Indeed about listening to records whilst washing up, and noticing that emancipated music (Sun Ra, Derek Bailey, Tony Oxley, Eugene Chadbourne, Music With My Insane Friend, AMM All-Stars) renders each kitchensink noise vibrant, delicious and beautiful ("the thud of a saucepan as it hits the zink" as Richard Evans put it). OTL wanted to find a way of injecting these beauties into his broadcasts. He also acknowledges that humans and animals delight in water sounds because they remind them of urination (see James Joyce's Chamber Music), and also of time spent in the womb and ancestral memories of all life's oceanic origins. On 12th February 2019, possibly due to the drama of freeing a pigeon brought in by his two cats (Brella and Sox), OTL recorded a record-length Splash'n'Klang, and realised his application to this instrument had reached some kind of peak. So the soundfile was offered to I Digress Indeed (guitarist of Music With My Insane Friend) and this duet (one of three) resulted: "I can now join the pantheon of my personal household gods, and die happy" quoth Lunch.

  • Boulder Dash 22nd Mar 2019

    It was not for nothing that Evil Dick used to call his label Polemic Music. This delusion that "good music" hovers above us sordid social animals like some unearthly pristine unctual anointment is a running sore which hurts us daily. In a false world, Truth becomes Polemic, not out of perverse itch of aggrandisement but simply by being true to itself. The proselytisers for "quality" are running stupid petty marketstalls whilst pretending they're in a holy temple. Overturn the tables! Wreck the sales racks! In order to foreground the fact that all art is social (or anti-social!) statement, I wrote biographies of Zappa and Bailey - inspiral rubbishers of quality norms - but this "intellectual" approach got me the wrong audience. So I sent them over-educated suckers off to peruse Wire magazine, and tried something else: radio. After seventeen years, I'm glad to say I've gathered various Mitglieder (aka AMM All-Stars) and become a mite louder. Here is my most condensed and organised polemic yet, where I challenge you to tell the difference between: Blues, Freak Rock, Punk Rock, Free Improvisation, Politics, Entertainment and Poetry (with a little ugly something - probably called "Graham" - on the side ...).

  • Bat Chainpuller 26th Mar 2019

    At least this guy talks about participatory decision making in relation to AI...open source and blockchain...at least that stuff is connected to P2P and Common transition stuff...and it's on Joe Rogan...fuck me...but Joe still ain't no hippie and socialism ain't one of his pet likes...no examples of it ever working anywhere according to him and it leads to the gulags (not sure if he's ever said that actually, but I reckon he believes it)...or as Eric Weinstein said...and he's probably the second smartest person in the world...we don't want nice, good's alright, but nice leads to the gulags! But this guy reckons he's a kind of hippie anarcho socialist libertarian...he says it at some point...perhaps I misheard...



    "SingularityNET’s Ben Goertzel has a grand vision for the future of AI
    Mike Butcher@mikebutcher /

    SingularityNET — an ambitious project to create a decentralized marketplace for AI — has raised a lot of money in its token sale. In around 60 seconds after opening the sale to the public, it sold out of the whole amount of available tokens (the AGI token), bringing the total raised to $36 million.

    However, in this day and age, a startup raising a lot of money in an ICO is not really of interest, at least to many. This is part and parcel of the crazy, unregulated, crypto world these days. But what is interesting is what SingularityNET actually plans to become.

    Dr. Ben Goertzel, the CEO and founder, has a grand vision.

    SingularityNET brings AI and blockchain together to create a decentralized open market for AIs. The implications are that it could let anyone monetize AI, allowing companies, organizations, and developers to buy and sell AI algorithms at scale, thus lowering costs and increasing the capabilities of the AIs. Eventually, the plan is to plug Hanson Robots’ “Sophia” robot literally into SingularityNET’s network in order to power its brain. If this sounds like a Sci-Fi movie, that’s because a lot of Sci-Fi has exactly this plot line.

    In an interview, I asked him how it will all work:

    “Proprietary marketplaces exist, like the Amazon Web Services for instance. What we’re creating here is a decentralized marketplace, more like BitTorrent. There’s no central dictator deciding what gets in there. Anyone can put an AI online, wrap it in our API, announce it to the network and any business that needs AI as a service can request it.

    “Then you need a good reputation system to grade the best AIs with a high rating. We need blockchain to let us do this in a peer to peer, decentralized way. P2P software that is reliable and not easily hackable and involves payment, but it would still need a distributed ledger. BitTorrent doesn’t involve payment or Identity management or high security, which is why we are doing this on blockchain.

    “AI processing is compute time sensitive. So we chose the OpenCog platform, which plays two roles. One is that it’s one of the many AI tools that we will use to build some initial AI services in the network, just the way Apple also sells its own apps. We will put some cool tools in there. The other role is closer to the infrastructure.

    “We want to make the system so that the AI layer is independent of what blockchain we are using. Our prototype uses Ethereum, but this is too slow. So we want to be able to swap out the blockchain technology easily. So, the smart contracts need to be expressed in an abstract way which is independent of the blockchain we are using. And OpenCog’s logical language works well for this.

    “If an AI in the marketplace is not useful it will get a lower rating. But in this sort of market something doesn’t have to be useful to everyone. So for instance, you could have an algorithm that was only useful to coffee farmers in Africa. I mean, Amazon Cloud doesn’t offer that because coffee farmers are not high on many of its lists to address. But it could still be valuable to some people. Like coffee farmers!

    “It’s like having Elance, but the AI is doing a service, not a person.

    Could it be scalable? AI algos aren’t invented in big tech companies anyway, they are invented by academics and students. So if you have a way for the inventors to wrap their algorithms in a decent API, then you allow the creators to benefit, rather than them having to create a startup and sell it to a big company.

    “Instead of putting their code on GitHub, they can put it on SingularityNet and it can be found by businesses and then they can get paid for it. On the end-user side, we need to get businesses into the habit of connecting their code to SingularityNet APIs rather than Amazon’s APIs or startups which have a nasty habit of disappearing when the startup gets bought.

    “Much of my own motivation is to get a greater level of AI behind all sorts of software like the Hanson Robots, using it as a showcase of the SingularityNET. As this gets smarter we are going to point Sofia’s brain to SingularityNET, moving it from the Hanson cloud into the SingularityNET, such as using computer vision. We should have some substantial improvements by August next year. We gain from the ability for anyone around the world to improve Sofia’s mind. We can’t hire all these enthusiasts but we can give them an easy way to contribute to Sofia’s mind.

    “So a kid could make millions of dollars from putting their computer vision AI into our market, for instance.”

    It’s a very, very big vision. Look out Amazon is all I can say…"
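
    For anyone wondering what “put an AI online, wrap it in our API, announce it to the network” amounts to mechanically, here is a rough sketch. Every name in it (MarketplaceClient, ServiceListing, register_service) is hypothetical; this is not SingularityNET’s actual SDK, just the general shape of a discoverable service registry with paid calls:

    ```python
    # Hypothetical sketch of a decentralized AI-service marketplace.
    # None of these names are real SingularityNET APIs.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class ServiceListing:
        name: str
        description: str
        price_per_call: float              # say, in AGI tokens
        handler: Callable[[bytes], bytes]

    class MarketplaceClient:
        def __init__(self) -> None:
            # Stand-in for the on-chain registry that peers would query.
            self.registry: Dict[str, ServiceListing] = {}

        def register_service(self, listing: ServiceListing) -> None:
            # A real network would publish this to a distributed ledger
            # so any peer can discover, call, and rate the service.
            self.registry[listing.name] = listing

        def call(self, name: str, payload: bytes) -> bytes:
            # Payment and reputation updates would settle on-chain here.
            return self.registry[name].handler(payload)

    def crop_disease_detector(image_bytes: bytes) -> bytes:
        # Placeholder for actual model inference.
        return b"leaf_rust: 0.87"

    client = MarketplaceClient()
    client.register_service(ServiceListing(
        name="coffee-crop-vision",
        description="Disease detection for coffee farmers",
        price_per_call=0.1,
        handler=crop_disease_detector,
    ))
    print(client.call("coffee-crop-vision", b"<image bytes>"))
    ```

    The niche example is deliberate: a service only coffee farmers want never has to justify itself to Amazon’s product roadmap; it just has to find its callers on the network.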

    These nerdy opensource guys...these coders and algorithmers and software developers...they're gonna fix everything...and they're so cute the way they talk...“We want to make the system so that the AI layer is independent of what blockchain we are using. Our prototype uses Ethereum, but this is too slow. So we want to be able to swap out the blockchain technology easily. So, the smart contracts need to be expressed in an abstract way which is independent of the blockchain we are using. And OpenCog’s logical language works well for this."
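
    Stripped of the buzzwords, the “swap out the blockchain technology easily” line describes an ordinary interface boundary. A guess at the pattern in miniature, entirely illustrative and nothing to do with the real OpenCog or SingularityNET code:

    ```python
    # Hypothetical sketch: smart-contract logic written against an
    # abstract ledger so the underlying chain can be swapped out.

    from abc import ABC, abstractmethod

    class Ledger(ABC):
        @abstractmethod
        def publish_contract(self, terms: dict) -> str:
            """Deploy a contract and return its identifier."""

        @abstractmethod
        def settle(self, contract_id: str) -> None:
            """Execute payment for a completed service call."""

    class EthereumLedger(Ledger):
        def publish_contract(self, terms: dict) -> str:
            # Would compile the abstract terms to an EVM contract.
            return "0xabc123"

        def settle(self, contract_id: str) -> None:
            print(f"settling {contract_id} on Ethereum (slow)")

    class FasterLedger(Ledger):
        # Drop-in replacement once Ethereum proves too slow.
        def publish_contract(self, terms: dict) -> str:
            return "fl-0001"

        def settle(self, contract_id: str) -> None:
            print(f"settling {contract_id} on FasterLedger")

    def sell_ai_service(ledger: Ledger, price: float) -> str:
        # The marketplace logic never names a specific chain.
        return ledger.publish_contract({"service": "ai-call", "price": price})

    print(sell_ai_service(EthereumLedger(), 0.1))   # today
    print(sell_ai_service(FasterLedger(), 0.1))     # after the swap
    ```

    Whether or not the real system pulls it off, that is the whole trick being described: keep the contract terms abstract, and the chain becomes a plug.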