Happy Ent

About Happy Ent

  • Rank
    Godfather of the Weirwoods
  • Birthday 07/01/1968

  1. To Hell With you 2016!!!

    I’m much more confident about the world at the end of 2016 than at the end of 2015. Democracy has won, elections have won. I “lost” both of the big elections in the anglosphere (in the sense that I rooted for the losing side), but in both cases it was proved that the people actually can change society in the face of massive propaganda. I find that very heartening. (I think both decisions were wrong. But wrong decisions are part of democracy.) Politics has become interesting again, instead of everybody blindly following the Davos elite in their disgusting global capitalist circle jerk towards an authoritarian multiculturalist hell with an insanely rich upper class. Identity politics died in 2016. That is great. Multiculturalism died. Postmodernism took a big hit. Global capitalism took a big hit. These things are absolutely fantastic for me. Europe will reconsider some of its absolutely insane politics regarding immigration. Maybe even the Euro? Who knows. Everything is suddenly up for debate. Parts of the left are suddenly waking up and understanding that it’s about class. The totalitarian left has been badly shaken. Possibly some liberal democrats (in the narrow sense) like me will be able to recover some ground on the left and start agitating for civil liberties again.
  2. The Ethics of Artificial Intelligence

    I’ve alerted the moderators. It was nice knowing you, Zab.
  3. The Ethics of Artificial Intelligence

    I hasten to add that of these three, only Gates is a computer scientist (without a Ph.D., but clearly competent—I know his advisor, by the way). It’s fair to say that fears of superintelligence (a particular AI scenario) are most firmly rooted among thinkers outside of Computer Science: philosophers, mathematicians, physicists, etc. In contrast, CS people (like me) mostly have a different view: I consider General AI to be a very difficult problem and see no reason at all for believing that we will ever solve it. However, Stuart Russell, a more notable voice than Musk and Hawking, is in complete disagreement with me and was recently interviewed on Sam Harris’s podcast: https://www.samharris.org/podcast/item/the-dawn-of-artificial-intelligence1. His must be the most trustworthy voice arguing for an imminent robot apocalypse. For the record, although it should be clear from this thread: I believe that the problems caused by stupid (non-general) AI (which is just “algorithms”) are very real; no appeal to Skynet is necessary for worrying about AI.
  4. Convince me that breakfast wasn't a terrible act of self-harm

    I extract nutrients from the soil even while I’m asleep.
  5. The Ethics of Artificial Intelligence

    Still… the Alt-Right is now based on a pillar of “religious traditionalism?” Last time I checked they were atheists! It’s a fine piece that accurately describes some trends in internet culture that are good to know about. But as a description of the Alt-Right (whatever that means) it not only fails, it seems to fail deliberately. The atheism (and general rejection of conservative social values), as well as the principled defence of basic civil liberties around information (privacy and anonymity), are very, very important trends in the Alt-Right as well, and even a cursory glance at “the movement” makes that clear (and is well reported, so hard to miss for a serious writer). Still, this article, together with the “original” Breitbart article (an-establishment-conservatives-guide-to-the-alt-right (Mar 2016)), forms a decent picture. At least from where I’m standing.
  6. The Ethics of Artificial Intelligence

    The Open Society and its Enemies, Popper’s great analysis of authoritarianism. (Popper is mostly known for his theory of science, but his political philosophy is much more important, I think.) I finally read the book (having read a lot of secondary stuff about it over the years) and found it incredibly clear and easy to read. Highly recommended.
  7. The Ethics of Artificial Intelligence

    That is the crux, but (and now comes my nontrivial point) it’s the crux of all systems of governance. You can replace “technology” with “the state” and you have the exact same question for the exact same reason. Even speaking as a technologist, I have to point out that these questions are questions of political philosophy, not of artificial intelligence. So what is the answer? It depends. If you are with Plato’s Republic, you will arrive at some (totalitarian) answer. If instead you took the Popper-pill, you arrive at a different answer. If you are an anarchist (in which I include libertarians), you arrive at a different answer still. My answer (because Popper) is that the universality of the rule of law is holy. This entails outcome differentials, and these differentials will track group membership. (In particular, society will be biased when viewed from the perspective of outcomes.) Outcome differentials are (to me) the inevitable consequence of fairness: the paradox of liberty is exactly that in a fair society, all the variation will be explained by variables that society does not control. So it’s a deep question. Algorithms make this deep question very visible and operationalise it.
  8. US politics: Heil to the Chief :(

    Yes, let’s. This is exactly what many countries do, as a matter of routine. The election night count is preliminary (and optimised for speed); then there is an official “recount,” which is the one that is binding. Of every single vote. All of which were cast on paper. When I (this morning, on Swedish TV) explained this to a journalist, she was honestly surprised that the US does not do this. The slow, transparent count of paper ballots increases trust in the election. If an election system (such as the US’s) sees that trust is eroding, a risk-limiting auditing process is the standard, unexciting, age-old way of incrementally improving the system. This is the right thing to do, in particular in a relatively untrustworthy system like the American one, to make it a bit better. Not perfect (perfect systems don’t exist). There are many other aspects of the US system I’d like to see improved, just as there are many aspects of the Swedish system. The only way democracy progresses is by well-reasoned, incremental improvement to democratic institutions. (Popper, Popper, Popper, broken record.)
  9. US politics: Heil to the Chief :(

    He did no such thing. https:[email protected][email protected]6113b0ba#.9gl8wvma9 “That article, which includes somebody else’s description of my views, incorrectly describes the reasons manually checking ballots is an essential security safeguard (and includes some incorrect numbers, to boot). Let me set the record straight about what I and other leading election security experts have actually been saying to the campaign and everyone else who’s willing to listen.”
  10. US politics: Heil to the Chief :(

    Halderman is not a moron, and his argument is not moronic. I don’t think there is anybody on the planet whom I trust more in e-voting questions than Alex. (Also, nice guy.) Voting machines are a bad idea because they are very easy to manipulate. (Alex himself has demonstrated that many times.) Recounts are a good idea because they increase trust in the electoral process, which is the most important aspect of the voting system. (There is a mini-course in political philosophy and democratic theory here, which I won’t bore you with.)

    Many democracies have routine recounts built into the process (as opposed to recounts that are triggered by a formal protest, as in the US). This is a good idea. Many countries eschew voting machines (mechanical or electronic) and use paper ballots and transparent hand-counting. This is a good idea. These systems enjoy very high trust.

    The US is currently experiencing a (completely correct, predictable, and democratically healthy) decrease of trust in the voting process, to some extent motivated by the increase of opaque (machine-based) voting processes that are not routinely audited. This problem can be addressed exactly in the way that Alex (and many other experts in voting systems, including me) advocate. Risk-limiting audits are a tiny step in the right direction (and very far from the systems routinely employed in many other countries). This is not a principled criticism of US democracy, but a correct, constructive, and highly welcome suggestion of an incremental change toward a somewhat better system.
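To make the audit idea concrete, here is a minimal sketch (in Python) of a BRAVO-style ballot-polling risk-limiting audit, one standard method in this family. All names and numbers are mine and purely illustrative; a real audit also needs ballot manifests, multi-candidate handling, and stratification, none of which is shown here.

```python
import random

def bravo_audit(reported_share, ballots, risk_limit=0.05, rng=None):
    """BRAVO-style ballot-polling risk-limiting audit (two-candidate sketch).

    reported_share: reported vote share of the winner (must be > 0.5).
    ballots: the paper ballots, as True (winner) / False (loser).
    Returns (confirmed, ballots_examined).
    """
    rng = rng or random.Random(0)
    t = 1.0                       # sequential likelihood ratio
    threshold = 1.0 / risk_limit  # confirm once t exceeds this
    examined = 0
    for ballot in rng.sample(ballots, len(ballots)):  # random order
        examined += 1
        if ballot:
            t *= reported_share / 0.5        # evidence for the reported winner
        else:
            t *= (1.0 - reported_share) / 0.5  # evidence against
        if t >= threshold:
            return True, examined  # outcome confirmed at the risk limit
    return False, examined         # fall back to a full hand count

# A comfortable 55/45 result typically needs only hundreds of ballots.
ballots = [True] * 5500 + [False] * 4500
confirmed, n = bravo_audit(0.55, ballots)
```

The appeal is exactly the “unexciting, incremental” property: wide margins are confirmed after inspecting a tiny random sample of paper ballots, while razor-thin margins push the audit toward a full hand count, which is the fail-safe you want anyway.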
  11. The Ethics of Artificial Intelligence

    One other thing, at the danger of stepping on the Americans’ toes: bias is inevitable. Bias is the whole idea. An unbiased algorithm would make downright terrible decisions. The algorithm isn’t broken if it is biased—instead, it works according to specification. All inference is bias, all deduction is bias. Being wrong is a problem. Being biased is not.

    Some may think that bias is a moral problem, but this is an epistemological non-starter. It posits that all groups, no matter how you classify them (age, sex, height, body mass index, wealth, zip code, nationality), are identical in needs, abilities, etc. That position is not only false, it is also non-operational. You cannot make any decisions based on data if you don’t bias, but data-driven decision making was the whole point of the exercise (as opposed to the only unbiased method: random decision making).

    I understand the instinctive aversion to this situation. In particular, I understand the objection “Yes, but what I meant was unfair bias, or wrong generalisation, or …” But these are all non-starters. The effect of correct bias is just as unfair (on the individual recipient) as that of incorrect bias. Whether you are unfairly treated as a child or fairly treated as a child (given that you are a child) makes no difference: you are treated as a child. The system correctly infers biases based on (in this case) age, and you can replace age by lots of other variables, including taboo ones.

    The O’Neil book has a good example about zip codes. The US allows discrimination based on zip code. It disallows discrimination based on race, for historical reasons. So algorithms (and people) discriminate based on zip code, which makes wonderfully correct predictions. In fact, better predictions than race. (Race predicts socioeconomic status well, but zip code does it better.) Now, both biases (the legal one and the illegal one) are factually correct (in that they are statistically sound). They work exactly as we want them to work. They discriminate (which is what they were designed to do) based on correct biases. That was the point.

    Of course, if you are targeted by these decisions (whether inferred by a human or a piece of code), you are just as screwed. If you failed to choose your parents wisely, you are subject to a completely correct and biased decision about which you had little choice. (The solution is of course principled Equality Before the Law, but that is a moral-political decision equally abhorrent to the Right and the Left, and one that is orthogonal to this thread, because it has little to do with algorithms.) But bias is exactly what we want from these algorithms. And the world is ugly and terribly biased, so the better the algorithms are, the more biased they will be. This is not in itself the problem. The problem arises from which decisions we base on the results.
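The zip-code-as-proxy argument can be made concrete with a toy simulation (every number and variable name here is invented for illustration): a decision rule that never sees the taboo variable still tracks it through a correlated proxy, and the resulting bias is statistically sound in exactly the sense described above.

```python
import random
from statistics import mean

rng = random.Random(42)

# Toy world: group membership (which the model never sees) shifts both
# a "zip-code score" and the base rate of the outcome we predict.
def person(group):
    zip_score = rng.gauss(0.0 if group == "A" else 2.0, 1.0)  # correlated proxy
    default = rng.random() < (0.1 if group == "A" else 0.4)   # differing base rates
    return group, zip_score, default

population = [person(rng.choice("AB")) for _ in range(20_000)]

# Group-blind "model": deny anyone whose zip-code score is high.
def deny(zip_score, threshold=1.0):
    return zip_score > threshold

denied = [(g, d) for g, z, d in population if deny(z)]
approved = [(g, d) for g, z, d in population if not deny(z)]

# The rule is statistically sound: denied applicants really do default more…
default_if_denied = mean(d for _, d in denied)
default_if_approved = mean(d for _, d in approved)

# …and yet denials track the taboo variable the model never looked at.
denial_rate = {
    g: mean(deny(z) for gg, z, _ in population if gg == g) for g in "AB"
}
```

Note that banning the proxy changes little: any other variable correlated with group membership reconstructs the same bias, which is why the problem lies in which decisions we base on the results, not in the algorithm being “broken.”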
  12. The Ethics of Artificial Intelligence

    Zabzie, a book that you might want to check out is Weapons of Math Destruction by Cathy O’Neil, who is competent about the subject matter and might share many of your societal concerns. (Caveat: I would avoid the term AI, because it carries too much baggage about agency. “Algorithmic decision making” or something like that is a better term, which leads to clearer thinking about the problematic impact. I care about these issues a lot (I’m an algorithms professor).) If you’re hesitant about investing the time to read a whole book, O’Neil was interviewed on the Econ Talk podcast, which I found worth listening to (while commuting or cleaning the house, say): Econ Talk podcast with O’Neil.
  13. The slow revolt of Western electorates

    Amen, Iskaral. That is why this very site is good, in particular when it is heterodox. As for avoiding click-bait, check out these tips. #4 is so good it will blow your mind.
  14. US Politics returns: the post-Election thread

    Is this question far different than the question why we on the Left accept rampant antisemitism in our own ranks as well? Can you be a Corbyn-supporter without also eating Jewish babies, for instance? Antisemitism is one of the few constants of life, as is the charge of antisemitism. I have seen few debates that improve from pointing it out in the Outgroup. Much is to be gained by finding it in the Ingroup. (For the record, I’m a philosemite and zionist.)
  15. Now you’re artificially inventing obstacles. I have sympathies for that mindset (I suffer from the same), but it’s a mind-killer. Remove it from your emotional arsenal, one opportunity at a time. (You will never defeat it. That’s not the goal. The goals are concrete and external.) Somebody just extended trust to you. Take it.

    Google “letter of recommendation”, maybe with “for graduate school”. You will find maybe a dozen different formats and tones. Pick one that you like. (If you want my opinion: I am a sucker for concrete, verifiable data, so I’ll take one in the style I sketched 3 posts above.) Just list the contexts in which the prof has met you, from the prof’s perspective. If there are grades to brag about, do that, and explain them. (“The grade גדול, which is the best grade in that class.”) Be precise, and include as much data as you can (dates, grades, title of course, title of project, place of excursion, etc.). If you can tailor these things to the applied-for position, you’re golden. (“In particular, Datepalm’s very high marks in X would be perfect for the position in Y, because of Z.”) Somebody who reads the application wants it to be (i) concrete and (ii) tailor-made.

    If your prof has too much time, he’ll run that extra mile (actually finding out which courses you took, and which bloody position you apply for). But most profs have better things to do. LoR-writing is exhausting and time-consuming (if done well), for exactly the reasons that make it trivial for the student to do herself.

    The final paragraph is “I strongly suggest considering Datepalm’s application in X. She is blah, blah, and blah. Please do not hesitate to contact me for further details.” Finish the letter with dates, addresses, etc. All the boring stuff.

    You then show up at the prof’s office and hand over the text, electronically. (Email that you send from your phone right there.) “I was unsure about what to do with the final paragraph, and I assume you want to change it completely.” (Geeky, disarming smile.) “Look, I really appreciate you doing this for me. Is there anything else I can do to help?” He or she will then change the adjectives and (if they are like me) turn the superlatives up to 11 (if they mean it and can actually remember who you bloody are) or damn you with faint praise (“always showed willingness to improve her grades and was aware of the possibilities for change in her work ethics or view on R Scott Bakker”). If this takes more than 2 minutes for your prof, it will not get done.