Thursday, June 25, 2009

Initiating a Corporate Social Media Presence – Unleash Your Inner Star Power

Social media is a scary animal, especially for companies or organizations accountable to stakeholders, policy, law, or any other governing entity that exists to mitigate risk. It’s also a collaboration channel and set of tools most appropriately and effectively used by humans – i.e., individual personalities (preferably employees) – rather than by corporate personas or third-party services. So how does a company begin to use social media, break into and contribute to the online dialogue, and avoid reputation issues while maintaining appropriate accountability?

Find, identify, nurture, coach and ultimately unleash your employee social media stars – they’ll be the face of the company and the purveyors of online dialogue, and they’ll most likely do a great job of it. Why and how?

First of all, social media platforms and tools are the dominion of the Internet-literate, the digerati – typically those inclined to communicate online at least as often as they relate offline (or by phone). It’s usually easier to find and identify folks like this than it is to train them, since most open social media skills can only be learned “in production” – there’s no “training” environment with test IDs. Most people who navigate social media easily have already learned these tools on their own, with their own personas or through outside interests. They’ve already taken some risks, exhibited some courage, and learned some lessons. Hire or identify a “Social Media Evangelist” – someone with proven, public credentials – to help guide your strategy and tactics. Then canvass your employees and survey the web to find employees who are already active social media users. These are the seed candidates, the American Idol “you’re going to Hollywood!” group.

Next, filter and weed – not all users of social media use it well, or use it in a manner consistent with your corporate culture, public presence, company policies or communication style. Some may be “power users” who nonetheless don’t write well enough to represent the professionalism of your company, notwithstanding the general acceptance of online slang and abbreviations. Others don’t consider the “bigger picture” or context-specific etiquette when posting – for example, where a post might end up and how it might be interpreted there. After multiple rounds of syndication, Digging, Mixxing, Friendstering, Tarpiping and so on, posts can blur the line between business and pleasure, losing their intended effect (and probably becoming just noise). Those who represent their interests or agenda with well-formed language, current and accurate references, obvious intentions and an open agenda are the targets for your external “Digerati” (i.e. your “Social Media Liaisons”, or “Corporate Communications Liaisons”).

Now that you’ve crowd-sourced the “experts”, it’s time to initiate them, molding their expertise according to your company’s interests – without unnecessarily diluting interesting personalities. This means a structured program of personal-brand coaching, training and perhaps apprenticeship in the finer arts of public relations, communications and Internet Marketing 2.0, as delivered via social media channels and aligned to your company’s traditional web presence. (Your company should have developed a “Policy on Public Discourse”, or something like it, governing communications by employees in public and on the Internet – we’ve implemented this at Blackstone Technology Group.) Since there aren’t yet very effective or widely available tools for automating social media governance, the best approach is learning by example: demonstrations and internal discussion with the Social Media Liaison as postings are made, followed by a review-and-comment process as the “trainees” start to participate. While the social media landscape is always changing, and new tools and methods pop up all the time, the foundations of successful public discourse on the Internet don’t really change:

Be real, be yourself; be wise to the current policies and public relations objectives of your company and context; don’t spam or be negative; don’t miss opportunities to appropriately use marketing keywords; and by all means give back to the communities you participate in.

Success in this process yields the right number and distribution (across various topic areas or corporate functions) of employee social media “stars”, who most effectively represent both your company and their personal brands as positive, engaged and personable presences on the Internet. Not only will the company prosper through social media use, but employees will also find their voices and opinions more readily exposed, promoting real pride in their contributions to their company, their fellow employees and their own careers.

If you’d like more information on this subject, just drop me a (social) line. I’ve been experimenting with this process from both the employee and employer perspectives – it certainly takes a lot of in-person guidance and coaching to learn to use social media effectively without harming your personal reputation, career, or corporate and stakeholder interests.

Tuesday, June 23, 2009

Government Social Media Reputation Management in the Cloud

During this morning’s IAC breakfast discussion of “Transparency, Collaboration and Web 2.0”, a panelist made a very interesting point. The US Federal Government’s use of Internet social media services and cloud-based information-sharing applications is well underway, albeit at the very earliest of stages (mainly due to significant policy, privacy, security and simple “newness” issues) – and by far one of the major risks lies with accountability.

Accountability in collaboration and information-sharing environments is typically achieved, to some degree, by associating metadata with the information packages being exchanged, or with the “containers” of online events and the trusted identities of those participating in the dialogue. With social media applications and contexts like Twitter or Facebook, however, there are far too many ways that the “information packages” (i.e. unmanaged conversation bites) get exposed, syndicated and shared – disassociated from what I’ll call the “accountability metadata”. Accountability metadata might be described as one part records management (provenance, chain of custody, attribution, etc. of the actual material), one part situational awareness (i.e. the UCORE model: who, what, when and where regarding the actual event context being discussed), one part “trust in context” or “reputation” (i.e. popularity index, authority index, security attributes, etc.), and one part semantic accuracy (i.e. the topics and language being used are consistent with the context of discussion within which they’re introduced, for example according to a NIEM namespace).
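As a concrete (if entirely hypothetical) illustration, the four “parts” above could be bundled into a single record that travels with an information package. Every field name and value below is my own invention for the sketch – nothing here is drawn from actual UCORE or NIEM schemas:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccountabilityMetadata:
    """Hypothetical record bundling the four parts of accountability metadata."""
    # Records management: provenance and attribution of the material
    author: str
    source_uri: str
    chain_of_custody: list  # ordered list of services that relayed the item
    # Situational awareness: who/what/when/where of the event discussed
    event_who: str
    event_what: str
    event_when: datetime
    event_where: str
    # Trust in context / reputation
    authority_score: float  # e.g. 0.0-1.0, from a reputation index
    # Semantic accuracy: the vocabulary/namespace the terms are drawn from
    vocabulary_namespace: str

# An invented example: a first responder's health-related tweet
meta = AccountabilityMetadata(
    author="@first_responder",
    source_uri="http://example.org/tweet/123",
    chain_of_custody=["twitter", "friendfeed", "blog-widget"],
    event_who="first responder",
    event_what="health-related assessment",
    event_when=datetime(2009, 6, 23, tzinfo=timezone.utc),
    event_where="Washington, DC",
    authority_score=0.8,
    vocabulary_namespace="http://example.org/health-vocab#",
)
```

The point of the sketch is the chain-of-custody field: each service that re-syndicates the item would need to append itself and pass the whole record along – which is exactly what today’s social media channels don’t do.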

"S-CORE" for "Social Media Core Information"?

As an example, the tweet associated with this blog entry is in fact an “information package”, albeit one made up of unstructured data (as far as I can manipulate it). Within the Twitter universe, there is an association (or “assertion”) of accountability metadata with this tweet, so long as you’re a member of the community and can view my profile data, link references and prevalent topical themes. My profile data is associated with an employer and other communities, which themselves provide additional accountability data. But what happens when the blog tweet is Twitterfed, gets Tarpiped into Identica, over to FriendFeed, into Facebook, and finally has its RSS feed repurposed as a discussion item on someone else’s blog widget?

The original metadata isn’t carried along, and therefore some degree of manual intervention may be required to respond to non-attributed, out-of-context or otherwise mis-purposed data. Google searches may return results containing my tweet’s language in unintended contexts, possibly enabling alternative or even incorrect interpretations. For example, a homeland security “tweet” from a first responder containing a health-related assessment may be deemed by HHS to be inaccurate – and possibly dangerous as most obviously interpreted by the public. Enter “Online Reputation Management”.

Online reputation management is a significant industry in itself (many local Washington DC Internet marketing companies provide it), focused on making sure search engine results aren’t creating or promoting a false or unwarranted image – of a person, company or product – because of overwhelming yet unverified online information posted to the contrary. Back to the HHS example: the erroneous tweet works its way into Google search results via multiple channels (and perhaps aggregate or federated search results), and subsequently becomes “the truth” because it’s on the first page of results. HHS, or some other responsible, validating entity, must now engage reputation management techniques to deliver more, better or different information into many of the same social media channels – in addition to a couple of its own highly authoritative channels – to counter the ultimately false search engine results.

This may be part of the reason that Data.gov has so far produced only raw data, not “information” – information carries expectations with it, and delivers a degree of unstructured accountability that’s very hard to define, manage and monitor on the Internet. Structured data is far easier to manage, since it typically isn’t shared via social media (at least not with its structure intact), and structured metadata is easily embedded. Citizens can certainly help resolve semantic inconsistencies, establish some level of “social trust” by using the data, and prove its usefulness (and therefore legitimacy) by creating popular applications – but citizens aren’t really accountable to the rest of the Federal Government’s constituency, or to its reputation. Ultimately, organizations will look back to the Government for trust and accountability with respect to the information packages (vs. raw data) they can use in legitimate business ventures involving social media.

However, the Federal Government’s foray into producing information packages for consumption by public social media will likely be constrained for some time to come, until industry can come up with a generally accepted standard – and technology examples – for permanently associating “accountability metadata” with unstructured information payloads released into the wild. This might then be followed by an oversight agency or program that could perhaps automate some of the Federal Reputation Management tasks that would then be necessary – enabling many more useful, unstructured conversations in public social media, moderated by trusted Government sources. Perhaps from the cloud.

Thursday, June 11, 2009

Probably Interesting Information

(Warning – hyper-theoretical stream of probably uninformed semi-consciousness to follow…)

In considering the possible types of information that might need to be included in enterprise information-sharing programs, while sifting through my TweetDeck, something’s become quite clear – most of our Government information-sharing exercises are about “signals”, “data” and “information” already known (at some level) to be “required”, “useful” or “possibly interesting” – judged so by existing processes, policies, roles, business rules and perhaps knowledgebase ontologies.

I’d like, however, to receive and share more information from the government that’s “probably interesting”, but that hasn’t yet been confirmed as such – by them, or by me and my community.

This happens all the time in social media. I subscribe to or participate in an information-exchange forum based on a particular knowledge context (and within my own agenda), and routinely view information that’s been posted within this context for unintentional use – i.e. it “is” interesting to the poster, who, by virtue of their understanding of the community context, assumes it’s therefore “probably” interesting to others in the community. Not “possibly” (which, if so, would be posted to a non-specific public forum), but “probably”. Destined for “unintended, though probable use”.

Over at Data.gov, some “possibly interesting” information is being made available for further consumption and mashing – but some thought was already applied to determining its likely level of usefulness, constrained by security requirements, information-sharing policies and the role descriptions of the posters. As a result, a lot of it isn’t interesting – or even probably interesting – at all.

Now imagine a mashup application with relatively unfettered access to a variety of government data sources. From those sources it develops a “knowledge map”, semantically compares this map to a map of my own personal knowledgebase (my blog, my articles, my social media conversations, things I like to read, favorite books, etc.), and then reacts – perhaps with some basic guidance from me – to contextual “information-sharing events” (i.e. the arrival or transformation of certain information) with intelligent alerts that some “probably interesting information” is available. Now that would be something.
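A drastically simplified sketch of that comparison: treat each “knowledge map” as a bag of terms, and flag incoming items whose overlap with mine clears a threshold. The Jaccard score, the threshold, the tokenization and the sample texts are all invented stand-ins for real semantic matching:

```python
def terms(text):
    """Crude 'knowledge map': the set of lowercase words in a text."""
    return set(text.lower().split())

def probably_interesting(item, personal_map, threshold=0.25):
    """Flag an incoming item if its term overlap with my personal
    knowledge map (Jaccard similarity) meets the threshold."""
    item_terms = terms(item)
    overlap = len(item_terms & personal_map)
    union = len(item_terms | personal_map)
    return union > 0 and overlap / union >= threshold

# My "personal knowledgebase": blog topics, favorite subjects, etc.
my_map = terms("semantic ontologies information sharing cloud social media government")

# Incoming "information-sharing events" to screen
alerts = [item for item in [
    "new government cloud information sharing pilot",
    "celebrity gossip roundup",
] if probably_interesting(item, my_map)]
# alerts keeps only the first item; the gossip roundup scores zero overlap
```

A real implementation would compare ontology concepts rather than raw words, but the shape of the alerting logic would be similar.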

Kind of like when my 5-year-old arrives home from preschool with loads of “probably interesting information” about other families, teachers, etc. Much better than gossip.

Friday, June 5, 2009

Information-Sharing with Cloud Semantic Ontologies

Quite a mouthful, the title of this post. However, this language is becoming more and more critical to the objective of cross-domain information-sharing, and it’s becoming easier and easier for the public to actually use in building information search, discovery, fusion and correlation/analytical applications.

A while ago, I was introduced to the semantic wiki technologies and communities of Knoodl (knoodl.com), a mechanism for like-minded people to collaborate on building semantic vocabularies using open standards like RDF and OWL. Basically, it describes the terms and language of a topic area in a manner that can be expressed, via XML, for consumption by computer applications. So when a message arrives in your system with information labeled “SAR”, or a query is sent forth with the same acronym, a test of this word against the machine-readable vocabulary can determine what the likely meaning really is – “suspicious activity report”, “suspect action report”, “search and rescue”, “specific absorption rate”, etc.
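A toy illustration of that disambiguation step. The “vocabulary” here is a hand-built dictionary rather than actual RDF/OWL, and the context terms are my own guesses – a real system would consult the machine-readable ontology instead:

```python
# Hypothetical stand-in for a machine-readable vocabulary: each
# expansion of "SAR" is tagged with context terms it belongs with.
VOCABULARY = {
    "SAR": {
        "suspicious activity report": {"law", "enforcement", "homeland", "security"},
        "search and rescue": {"emergency", "rescue", "maritime", "disaster"},
        "specific absorption rate": {"radio", "radiation", "phone", "health"},
    }
}

def resolve(acronym, message_text):
    """Pick the expansion whose context terms best match the message."""
    words = set(message_text.lower().split())
    candidates = VOCABULARY.get(acronym, {})
    return max(candidates, key=lambda m: len(candidates[m] & words), default=None)

meaning = resolve("SAR", "coast guard launched a maritime rescue operation")
# "maritime" and "rescue" match, so the search-and-rescue sense wins
```

The real value of an RDF/OWL vocabulary is that these context associations are formally modeled and shared, instead of hard-coded as above.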

Knoodl (by Revelytix, Inc.) truly enables bottom-up “crowd-sourcing” of vocabularies within specific domains, from agriculture to military and homeland security – and it’s now available as a free, cloud-hosted (Amazon EC2) application with hooks for automated applications to use. What’s most helpful is that this vocabulary-building environment allows business, mission and technology people to create great machine-readable ontologies/knowledgebases without having to actually use programming languages or edit XML. The vocabularies created or uploaded are immediately accessible through the query standard SPARQL, with full support for Knoodl’s role-based permission model.
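For an automated application, a SPARQL query over HTTP is just a form-encoded POST. Here’s a minimal sketch using Python’s standard library; the endpoint URL is a placeholder, and the query assumes the vocabulary labels concepts with rdfs:label/rdfs:comment – Knoodl’s actual endpoint, authentication and modeling conventions will differ:

```python
import urllib.parse
import urllib.request

# Placeholder endpoint - a real Knoodl SPARQL endpoint would go here.
ENDPOINT = "http://example.org/sparql"

# Ask the vocabulary for every known meaning of the label "SAR".
QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?concept ?meaning WHERE {
  ?concept rdfs:label "SAR" .
  ?concept rdfs:comment ?meaning .
}
"""

def build_sparql_request(endpoint, query):
    """SPARQL Protocol basics: POST the query form-encoded,
    asking for JSON results back."""
    data = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={"Accept": "application/sparql-results+json"},
    )

req = build_sparql_request(ENDPOINT, QUERY)
# urllib.request.urlopen(req) would execute it against a live endpoint.
```

The request-building part is standard SPARQL-over-HTTP; only the endpoint and the vocabulary’s modeling conventions are assumptions here.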

Most barriers are now pretty much gone: forward-thinking agencies can collaboratively describe their data and information, expose the vocabulary in a manner that enables accurate representation and description within a domain context, and leverage a “Web 2.0” semantic technology platform for free, in conjunction with the rapidly growing number of data query and mashup technologies already under way (like Data.gov). Some very forward-thinking technology vendors are already building this technology into their cloud-based “Operational Intelligence” platforms, such as Vitria’s M3O Web 2.0 BPM suite – and we all understand that business entity definition and business process identification/management are at the heart of most successful SOA implementations.