netwit 2.01

#WEB+#WIKILEAKS+#ASSANGE+#Jónsdóttir – Did Julian Assange Learn The Politics Of V For Vendetta From Birgitta Jónsdóttir?






Did Julian Assange Learn The Politics Of V For Vendetta From Birgitta Jónsdóttir?




January 10, 2011 Posted by | Birgitta Jónsdóttir, cognitive infiltration, cyber attacks, cyber crime, cyber terrorism, cyber warfare, cyber-utopianism, cyberpunk, cypherpunk, Government 2.0, hacker culture, internet activism, Jónsdóttir, Julian Assange, V for Vendetta, WikiLeaks | Leave a comment

WEB WEBSITE #URLsemanticweb {#semanticweb #RDFa #Facebook #Twitter #Google}



#RDFa Support: How Do #Google, #Facebook & #Twitter Compare? And How Will This Impact Their Position In The #Semantic #Search #Engine Market?


Bernard Lunn,  26 June 2010


* Plumps for #Facebook at the time of writing
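Not from Lunn's article, but as a quick illustration of what RDFa support means in practice: Facebook's Open Graph tags are RDFa attributes embedded in a page's markup, and any RDFa-aware crawler reduces them to RDF triples. The sketch below, in Python with rdflib, builds the triples such markup boils down to; the page URL is a made-up example, while og:title and og:type are real Open Graph property names.

    # Open Graph-style RDFa markup such as
    #   <meta property="og:title" content="Example Film"/>
    #   <meta property="og:type"  content="video.movie"/>
    # is what Facebook reads; once parsed it amounts to the triples below.
    # The page URL is hypothetical.
    from rdflib import Graph, Literal, Namespace, URIRef

    OG = Namespace("http://ogp.me/ns#")            # Open Graph vocabulary
    page = URIRef("http://www.example.com/film")   # made-up page URL

    g = Graph()
    g.add((page, OG.title, Literal("Example Film")))
    g.add((page, OG.type, Literal("video.movie")))

    # Any RDFa-aware consumer (Facebook, Google, Twitter crawlers, ...)
    # ends up with the same machine-readable statements.
    print(g.serialize(format="turtle"))

How faithfully each service extracts and uses such attribute-level metadata is what the article's comparison turns on.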



January 10, 2011 Posted by | connective knowledge, FaceBook, Google, RDF, Semantic Web, social media, social networks, social semantic web, Timothy C. May, Twitter, Web 2.0, Web 3.0 | Leave a comment

INTERNET WEB The Semantic Web & Twitter



Post on the blog Social Whisper:


The Semantic Web & Twitter

9 December, 2008


Links to later posts on the same topic:

How Twitter could beat Google to the semantic web

Mark Evans at Twitterrati on anonymity on Twitter.



January 10, 2011 Posted by | Semantic Web, social media, social networks, social semantic web, Twitter, Web 2.0, Web 3.0 | Leave a comment

SEMANTIC WEB Google, Twitter and Facebook build the semantic web



Google, Twitter and Facebook build the semantic web


Jim Giles, New Scientist, 2 August 2010   [subscription only]


This is the introduction:

A TRULY meaningful way of interacting with the web may finally be here, and it is called the semantic web. The idea was proposed over a decade ago by Tim Berners-Lee, among others. Now a triumvirate of internet heavyweights – Google, Twitter and Facebook – are making it real.

The defining characteristic of the semantic web is that information should be stored in a machine-readable format. Crucially, that would allow computers to handle information in ways we would find more useful, because they would be processing the concepts within documents rather than just the documents themselves.

Imagine bookmarking a story about Barack Obama: your computer will store the URL, but it has no way of knowing whether the content relates to politics or, say, cookery. If, however, each web page were to be tagged with information about its content, we could ask the web questions and expect sensible answers.

It is a wildly attractive idea but there have been few practical examples. That’s about to change.

Google’s acquisition this month of Metaweb, a San Francisco-based semantic search company, is a step in the right direction. Metaweb owns Freebase, which is an open-source database. Why would Google want Freebase? Partly because it contains information on more than 12 million web “entities”, from people to scientific theories. But mostly because of the way in which Freebase accumulates its knowledge – it is almost as if a person were doing it, making links between pieces of information in a way that makes sense to them.

Freebase entries, culled from sources such as Wikipedia, are tagged so that computers can understand what each is about and link them together. Freebase lists, for example, that one entry for “Chicago” is about a city and another describes the hit musical. Entries are also linked to other relevant entries, such as other towns or shows.
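A toy sketch (not Freebase's actual schema; the URIs, class names and the setIn predicate are invented here) of the kind of machine-readable distinction described above, using Python's rdflib: two entries share the label "Chicago" but carry different types and are linked to each other, so software can tell the city from the musical.

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    # Invented namespace and terms, standing in for Freebase-style entity IDs.
    EX = Namespace("http://example.org/entity/")

    g = Graph()
    g.add((EX.chicago_city, RDF.type, EX.City))
    g.add((EX.chicago_city, RDFS.label, Literal("Chicago")))

    g.add((EX.chicago_musical, RDF.type, EX.Musical))
    g.add((EX.chicago_musical, RDFS.label, Literal("Chicago")))
    g.add((EX.chicago_musical, EX.setIn, EX.chicago_city))  # link between entries

    # Querying by type rather than by text string disambiguates the two:
    for entity in g.subjects(RDF.type, EX.City):
        print(entity)   # only the city entity is returned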




January 10, 2011 Posted by | FaceBook, Google, Semantic Web, social media, social networks, social semantic web, Tim Berners-Lee, Twitter | Leave a comment

INTERNET WEB Cyberspace Policy Review



Cyberspace Policy Review
– Assuring a Trusted and Resilient Information and Communications Infrastructure

* 76-page draft paper

* Useful timeline graphic on page 78, titled ‘History Informs Our Future’, covering 1900 to the present and highlighting key technological and legal milestones.



Open in another tab to read.


Kim Cameron’s Identity Blog post of 27 June 2010 gives a short review:

National Strategy for Trusted Identities in Cyberspace



January 10, 2011 Posted by | Bradley Manning, cyber attacks, cyber crime, cyber espionage, cyber security, cyber terrorism, cyber warfare, cyberpunk, cyberspace, Cyberspace Policy Review, cypherpunk, data leakage, Department of Homeland Security, encryption, Government 2.0, hacker culture, Identity Ecosystem Framework, info-war, insider security, insider threats, Internet, internet activism, internet-centrism, Julian Assange, Manning, National Center for Cybersecurity and Communications (NCCC), National Security Agency [NSA], National Strategy for Trusted Identities in Cyberspace, net neutrality, Network security, network theory, NSA, NSTIC, Open data, open source, Protecting Cyberspace as a National Asset Act (PCNAA), Semantic Web, social media, social networks, social semantic web, social silos, techno-libertarianism, Tim Berners-Lee, Web 2.0, Web 3.0, WikiLeaks | Leave a comment

INTERNET WEB National Strategy for Trusted Identities in Cyberspace #NSTIC



National Strategy for Trusted Identities in Cyberspace

– Creating Options for Enhanced Online Security and Privacy

June 25, 2010

 

* Department of Homeland Security draft paper

* Proposal for an Identity Ecosystem Framework

* Appendix of terms



January 10, 2011 Posted by | cypherpunk, data journalism, data leakage, Department of Homeland Security, digital journalism, encryption, Government 2.0, hacker culture, Identity Ecosystem Framework, info-war, insider security, insider threats, Internet, internet activism, investigative journalism, Julian Assange, National Center for Cybersecurity and Communications (NCCC), National Security Agency [NSA], National Strategy for Trusted Identities in Cyberspace, net neutrality, network anomalies, Network security, network theory, NSA, NSTIC, Open data, open source | Leave a comment

SEMANTIC WEB Is #Twitter the #Semantic #Web? @timbuckteeth



Is #Twitter the [#Semantic #Web]?

Prof Steve Wheeler @timbuckteeth


#edupunk


personalised learning

January 10, 2011 Posted by | connective knowledge, Semantic Web, Web 2.0, Web 3.0 | Leave a comment

SEMANTIC WEB GOOGLEBOOK The Social #Semantic Web



The Social Semantic Web

by

John G Breslin
Alexandre Passant
Stefan Decker

2009




January 10, 2011 Posted by | Semantic Web, social media, social networks, social semantic web, Web 2.0, Web 3.0 | Leave a comment

MICROBLOGGING #Microblogging: A #Semantic and Distributed Approach



Microblogging: A Semantic and Distributed Approach


Alexandre Passant [1], Tuukka Hastrup [2], Uldis Bojars [2], John Breslin [2]

[1] LaLIC, Université Paris-Sorbonne, 28 rue Serpente, 75006 Paris, France

[2] DERI, National University of Ireland, Galway, Ireland


…Twitter users have adopted certain short-hand conventions in their writing called hash tags, but their semantics are not readily machine-processable, thus raising the same ambiguity and heterogeneity problems that tagging causes. For example, the hash tag #paris could mean various things (cities, people etc.) depending on the context, and so cannot be automatically processed by computers. This lack of data formalism also makes finding relevant content difficult. While some services provide plain-text search engines, there is no way to answer queries like ”What are the latest updates talking about a programming language” or ”What is happening now within ten kilometres from here”.

~

Thus, there is a need to (semi-)automatically extract those URIs or concepts from plain text, or to let users annotate it similarly to what they can already do on Twitter with hash tags, but with more powerful processing that can extract and define URIs based on those tags. For example, rather than writing ”Visiting #Eiffel Tower in #Paris”, someone could microblog ”Visiting #dbp:Eiffel_Tower in #geo:Paris_France” so that the processor would be able to extract the two hash tags and, thanks to a predefined prefix mapping process, query DBpedia [1] and GeoNames to retrieve URIs of the related concepts. Thus, the updates would be automatically linked to existing URIs rather than to simple and meaningless – from a software agent point of view – text strings.
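A minimal sketch of the prefix-mapping idea, in Python. The regex, the prefix table and the URI patterns below are simplifications invented for illustration (real GeoNames URIs use numeric identifiers, and the paper's pipeline queries DBpedia and GeoNames rather than building URIs by string substitution); underscores keep each semantic tag a single token.

    import re

    # Hypothetical prefix -> URI-pattern table.
    PREFIXES = {
        "dbp": "http://dbpedia.org/resource/{}",
        "geo": "http://example.org/geonames/{}",  # placeholder; real GeoNames IDs are numeric
    }

    TAG = re.compile(r"#(?P<prefix>\w+):(?P<term>\w+)")

    def extract_uris(update):
        """Turn '#dbp:Eiffel_Tower'-style tags into candidate concept URIs."""
        uris = []
        for match in TAG.finditer(update):
            pattern = PREFIXES.get(match.group("prefix"))
            if pattern:
                uris.append(pattern.format(match.group("term")))
        return uris

    print(extract_uris("Visiting #dbp:Eiffel_Tower in #geo:Paris_France"))
    # ['http://dbpedia.org/resource/Eiffel_Tower',
    #  'http://example.org/geonames/Paris_France']

The point of the prefix mapping is that the tag itself names the vocabulary to resolve against, so the update ends up linked to stable URIs instead of ambiguous strings.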




January 10, 2011 Posted by | Semantic Web, social media, social networks, Tim Berners-Lee, Uncategorized, Web 2.0 | Leave a comment