netwit 2.01

BATTLE FOR THE INTERNET Google’s Sergey Brin: state filtering of dissent threatens web freedom

Day 3 Guardian 7 day series Battle for the Internet

Google’s Sergey Brin: state filtering of dissent threatens web freedom

Charles Arthur, Guardian 18 April 2012

April 18, 2012 Posted by | Facebook, Google, Internet, Internet filtering, walled gardens | Leave a comment

Google Hands Wikileaks Volunteer’s Gmail Data to U.S. Government

John Titlow, ReadWriteWeb 10 October 2011

The contacts list and IP address data of Jacob Appelbaum, a WikiLeaks volunteer and developer for Tor, were given to the U.S. government after it requested them using a secret court order enabled by a controversial 1986 law, the Electronic Communications Privacy Act, according to the Wall Street Journal.

October 11, 2011 Posted by | Google | Leave a comment

Google, Cloud Computing and the Surveillance-Industrial Complex

Christopher Ketcham and Travis Kelly

CounterPunch, April 1-15 2010

April 2, 2011 Posted by | Cloud computing, cyber security, cyber-tools, cyber-utopianism, cyberspace, Google, Lockheed Martin, NetOwl Programme, SRA | Leave a comment

#WIKILEAKS #China ~ WikiLeaks: China’s Politburo a cabal of business empires [6 Dec 2010]

WikiLeaks: China’s Politburo a cabal of business empires
Peter Foster, Beijing, The Telegraph, 6 Dec 2010

The WDIK column
When this article was first published, there was no rush of articles musing about the similarities between the Chinese and U.S. systems. Who in mainstream U.S. media would dare to moot such parallels?

It seems all states evolve into this sort of set-up. With Gore Vidal's words echoing that the U.S. has never been a democracy (what, then, an 'electoral state'?), it is quite easy to grasp that the U.S. might have been designed from inception to operate in the way the U.S. diplomats in the WikiLeaks cable leaks describe present-day China.

Perhaps there is a rule of statehood, governance, power and influence networks which says that this kind of arrangement is the default to which states that did not start off like that revert over time. A kind of biological-social law.

In a post-WikiLeaks world full of notions of P2P, what is essential is that spider charts of influence (an influence landscape) are drawn up to enable individuals to make up their own minds whether to trust a politician or business leader.

A series of flashcards in Netwit 2.1 will link to ideas on trust models. They won't necessarily come consecutively, but as they are found. I prefer visualisations to long screeds, so they will more often than not be graphics to aid thinking rather than complete explanations. The first one, which came from business trust modelling, has at its core 'capacity for trust'.
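The influence landscape above could start from something as small as a list of who-influences-whom edges, grouped by target. The names and the grouping rule in this sketch are invented for illustration; they are not from any cable or article cited here.

```python
# A toy "influence landscape": directed edges of who influences whom,
# grouped so each figure's backers can be read off (or fed to a spider chart).
# All names here are hypothetical placeholders.

from collections import defaultdict

# edge (a, b) means "a influences b"
influence_edges = [
    ("Donor A", "Politician X"),
    ("Lobby B", "Politician X"),
    ("Politician X", "Regulator Y"),
    ("Lobby B", "Regulator Y"),
]

def influence_map(edges):
    """Group each target with the set of parties influencing it."""
    influenced_by = defaultdict(set)
    for source, target in edges:
        influenced_by[target].add(source)
    return dict(influenced_by)

landscape = influence_map(influence_edges)
for target, sources in sorted(landscape.items()):
    print(f"{target} <- {sorted(sources)}")
```

From such a structure, a reader can judge at a glance how many independent parties stand behind a given figure before deciding whether to trust them.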

January 15, 2011 Posted by | China, cyber attacks, cyber espionage, cyber security, cyber warfare, Google, U.S.Embassy cables, WikiLeaks | Leave a comment

WEB WEBSITE #URLsemanticweb {#semanticweb #RDFa #Facebook #Twitter #Google}

#RDFa Support: How Do #Google, #Facebook & #Twitter Compare? And How Will This Impact Their Position In The #Semantic #Search #Engine Market?

Bernard Lunn, 26 June 2010

* Lunn plumps for #Facebook at the time of writing
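What RDFa support means in practice: the page's visible text is shadowed by machine-readable property/value pairs that a crawler can lift out. The HTML snippet and vocabulary prefixes below are illustrative assumptions, not taken from Lunn's article; the extractor is a deliberately minimal sketch using only the standard library.

```python
# Minimal sketch of RDFa-style extraction: pull (property, value) pairs
# from elements annotated with a 'property' attribute.
# The snippet and the foaf:/gov: terms are illustrative, not a real page.

from html.parser import HTMLParser

SNIPPET = """
<div about="/people/barack-obama" typeof="foaf:Person">
  <span property="foaf:name">Barack Obama</span>
  <span property="gov:role">President</span>
</div>
"""

class RDFaExtractor(HTMLParser):
    """Collect (property, text) pairs from annotated elements."""
    def __init__(self):
        super().__init__()
        self._current = None   # property name of the open element, if any
        self.pairs = []        # extracted (property, value) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._current = attrs["property"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.pairs.append((self._current, data.strip()))
            self._current = None

parser = RDFaExtractor()
parser.feed(SNIPPET)
print(parser.pairs)  # [('foaf:name', 'Barack Obama'), ('gov:role', 'President')]
```

A search engine that understands these annotations knows the page is about a person named Barack Obama, not merely a page containing those words, which is the leverage Google, Facebook and Twitter are competing over.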

January 10, 2011 Posted by | connective knowledge, FaceBook, Google, RDF, Semantic Web, social media, social networks, social semantic web, Timothy C. May, Twitter, Web 2.0, Web 3.0 | Leave a comment

SEMANTIC WEB Google, Twitter and Facebook build the semantic web

Google, Twitter and Facebook build the semantic web

Jim Giles, New Scientist, 2 August 2010   [subscription only]

This is the introduction:

A TRULY meaningful way of interacting with the web may finally be here, and it is called the semantic web. The idea was proposed over a decade ago by Tim Berners-Lee, among others. Now a triumvirate of internet heavyweights – Google, Twitter and Facebook – are making it real.

The defining characteristic of the semantic web is that information should be stored in a machine-readable format. Crucially, that would allow computers to handle information in ways we would find more useful, because they would be processing the concepts within documents rather than just the documents themselves.

Imagine bookmarking a story about Barack Obama: your computer will store the URL, but it has no way of knowing whether the content relates to politics or, say, cookery. If, however, each web page were to be tagged with information about its content, we could ask the web questions and expect sensible answers.
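The bookmarking example can be made concrete with a few lines: once each URL carries topic tags, simple questions become answerable by matching on the tags rather than on the document text. The URLs and tag names below are made up for illustration.

```python
# Sketch of the tagged-bookmark idea: each URL carries machine-readable tags,
# so a query can distinguish Obama-the-politics-story from Obama-the-recipe.
# URLs and tag vocabulary are hypothetical.

bookmarks = {
    "https://example.com/obama-policy":  {"topic": "politics", "person": "Barack Obama"},
    "https://example.com/obama-recipes": {"topic": "cookery",  "person": "Barack Obama"},
}

def ask(bookmarks, **criteria):
    """Return URLs whose tags match every given key=value criterion."""
    return [url for url, tags in bookmarks.items()
            if all(tags.get(k) == v for k, v in criteria.items())]

print(ask(bookmarks, person="Barack Obama", topic="politics"))
```

Without the tags, both pages look identical to the computer: two URLs mentioning the same name. With them, the "sensible answer" the article describes falls out of a one-line filter.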

It is a wildly attractive idea but there have been few practical examples. That’s about to change.

Google’s acquisition this month of Metaweb, a San Francisco-based semantic search company, is a step in the right direction. Metaweb owns Freebase, which is an open-source database. Why would Google want Freebase? Partly because it contains information on more than 12 million web “entities”, from people to scientific theories. But mostly because of the way in which Freebase accumulates its knowledge – it is almost as if a person were doing it, making links between pieces of information in a way that makes sense to them.

Freebase entries, culled from sources such as Wikipedia, are tagged so that computers can understand what each is about and link them together. Freebase lists, for example, that one entry for “Chicago” is about a city and another describes the hit musical. Entries are also linked to other relevant entries, such as other towns or shows.
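The "Chicago" disambiguation the article describes amounts to typed entries with links between them, which can be sketched as plain records. The entity IDs, types and links below are invented to mirror the example, not real Freebase data.

```python
# Sketch of Freebase-style typed entities: two "Chicago" entries with
# different types, each linked to related entries.
# IDs, types and links are illustrative, not actual Freebase records.

entities = {
    "chicago_city":    {"name": "Chicago", "type": "city",
                        "links": ["illinois", "new_york_city"]},
    "chicago_musical": {"name": "Chicago", "type": "musical",
                        "links": ["cabaret_musical", "broadway"]},
}

def lookup(entities, name, type_):
    """Disambiguate by name *and* type, as typed entries allow."""
    return [eid for eid, e in entities.items()
            if e["name"] == name and e["type"] == type_]

print(lookup(entities, "Chicago", "musical"))  # ['chicago_musical']
```

The `links` lists carry the second property the article highlights: each entry points at related entries, so a machine can walk from the musical to other shows rather than treating "Chicago" as an ambiguous string.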

January 10, 2011 Posted by | Facebook, Google, Semantic Web, social media, social networks, social semantic web, Tim Berners-Lee, Twitter | Leave a comment