From the Intranet to the Enterprise Knowledge Network
by Phillip Rhodes
Posted on Monday August 31, 2015 at 06:20PM in Technology
Since the mid-1990s, companies have been using Web technologies (HTTP, HTML, CSS, etc.) to build internal webs for knowledge sharing and collaboration. The term "Intranet" was adopted to describe these internal knowledge sharing systems, and Intranets have become ubiquitous in the years since.
But if the "Intranet" was the application of World Wide Web technologies inside organizations, we think it's time to start talking about the use of Semantic Web technologies inside of companies. At Fogbeam Labs we are referring to this approach as the Enterprise Knowledge Network (EKN).
Of course, Semantic Web technologies have been around for some time now, so why is it only now time to start applying them inside the enterprise? We believe the moment has arrived due to the confluence of several related factors:
- Simply put, the technology has gotten better, and we understand it better. We, as a technology community, now know how to develop and apply "SemWeb" tech far more effectively than we did 10 years ago. The tools have matured, the standards have improved, and we have much more collective experience.
- Open Source Software has made the technology radically more accessible. In years past, assembling a high-value SemWeb solution meant using expensive, proprietary software. But in 2015, everything you need to deploy SemWeb tech and build an Enterprise Knowledge Network is available as OSS or Free Software. Of course, commercial vendors like Fogbeam Labs offer support and services for a fee, but the overall cost for this level of technology solution has plummeted. And using Open Source Software is a better value proposition in many other regards anyway - Open Source is even shaping the future at Microsoft now.
- More data. The burgeoning interest in "Open Data" over the past few years has resulted in an explosion of available data, especially from government sources. At the same time, projects like DBPedia and WikiData are hard at work making the content from Wikipedia available as part of the Semantic Web. And the Linking Open Data initiative catalogs a ridiculously large number of datasets which are now available as semantic data. This data, combined with your internal data, allows for unprecedented opportunities to mine for new insights and opportunities (see the query sketch just after this list).
- Cheaper, faster computers, and cloud computing. The simple truth is, using SemWeb tech takes a lot of computing "horsepower", and 10 years ago that much horsepower was either not available or prohibitively expensive. Now, thanks to Moore's Law and the advent of IaaS providers like AWS, it is possible to deploy massive computing resources at reasonable prices.
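To make that last point about data concrete, here is a minimal sketch of pulling Linked Open Data from DBPedia's public SPARQL endpoint using Apache Jena (one of the Open Source tools discussed elsewhere on this blog). The specific class and property names (dbo:Company, dbo:location) are illustrative - DBPedia's actual modeling varies by entity - but the shape of the code is representative:

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class OpenDataDemo {
    public static void main(String[] args) {
        // Companies that DBPedia locates in North Carolina, with English labels.
        String sparql =
            "PREFIX dbo:  <http://dbpedia.org/ontology/> " +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT ?company ?name WHERE { " +
            "  ?company a dbo:Company ; " +
            "           dbo:location <http://dbpedia.org/resource/North_Carolina> ; " +
            "           rdfs:label ?name . " +
            "  FILTER (lang(?name) = 'en') " +
            "} LIMIT 10";

        try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                "http://dbpedia.org/sparql", sparql)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("company") + "  " + row.getLiteral("name"));
            }
        }
    }
}
```

Point the same handful of lines at your own triplestore instead of the remote endpoint, and you are querying your internal data the same way - which is exactly the Intranet-to-EKN move we're describing.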
In short, there's really no reason to delay moving from an Intranet to an Enterprise Knowledge Network. Now is the time to take advantage of Semantic Web technology to integrate all of the knowledge spread across your enterprise, making the right information and knowledge available to the people who need it, when they need it - sometimes even before they know they need it. An Enterprise Knowledge Network unifies all of the disparate repositories in your organization - Document Management servers, wikis, blogs, shared folders, databases, and applications - and lets you navigate the knowledge-space of your firm quickly and easily.
For more information on how you can move from an old-fashioned Intranet, to an Enterprise Knowledge Network, consult these two Fogbeam papers. If you have questions about how to move forward, contact Fogbeam Labs and let us help.
Oracle explain exactly why you should only use Open Source software
by Phillip Rhodes
Posted on Wednesday August 12, 2015 at 03:46PM in Technology
Unless you have been living under a rock for the past few days, you are probably aware of the (in)famous "No, You Really Can't" blog post from Oracle. The post sparked a firestorm of controversy with its ranting about the reasons Oracle customers can't probe Oracle products for security vulnerabilities. The original post has been deleted, but a copy of the post can still be found at the Internet Archive.
In the post, the author makes the following interesting comment:
Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products – and there is so much more to assurance than running a scanning tool - there are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals for (or “good code” seals) like Common Criteria certifications or FIPS-140 certifications. Most vendors – at least, most of the large-ish ones I know – have fairly robust assurance programs now (we know this because we all compare notes at conferences). That’s all well and good, is appropriate customer due diligence and stops well short of “hey, I think I will do the vendor’s job for him/her/it and look for problems in source code myself,” even though:
- A customer can’t analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive)
- A customer can’t produce a patch for the problem – only the vendor can do that
- A customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code)
Now what's interesting here is this: the three bulleted items above are three very precise and accurate reasons why you should stop using closed source software! I suppose the author of this piece thought they were being cute or glib by insulting their customers. Instead they laid out - in precise detail - exactly why their customers should drop Oracle products and switch to Open Source solutions. Because, with OSS:
- The customer CAN analyze the source code as part of their security audit process, and compare the actual code with the results from scanning tools
- The customer CAN create their own patch, and test it, and - since they probably don't want to maintain a forked version indefinitely - contribute it back upstream, where it benefits the entire community.
- The customer is NOT violating the license agreement by running static analysis tools (or, indeed, any other tool) against the code.
Of course, the exact details of what you can and can't do with OSS code vary according to the specific license in use. In our case here at Fogbeam, we're proud to say that almost everything we do is licensed under the Apache License v2 - a very "business friendly", permissive license that gives you, the customer, tremendous freedom and security.
Let me end this by saying "Thank You, Oracle. Thank you for helping explain to the world, why they should quit using your proprietary, closed-source, business-hostile products, and switch to Open Source instead."
Is Facebook at Work an Enterprise Social Network?
by Phillip Rhodes
Posted on Wednesday January 14, 2015 at 08:12PM in Technology
According to numerous media reports, Facebook have just (soft) launched their new business oriented "Facebook at Work" application. Given Facebook's ubiquity in the consumer social network space, the Facebook@Work announcement was met with great fanfare, with most observers suggesting that the new offering is aimed at competing with products like Yammer, Hipchat, Slack, Salesforce Chatter and others of that ilk. This raises a number of interesting questions, at least some of which we can't fully answer yet, as Facebook@Work is still in a "closed" beta at the moment. But we can take a look at some of the issues around this new product, and its implications for the business oriented social network space.
A few important questions stand out to us, including the topics already raised by some media pundits, like "can you trust Facebook with your data?" and "how will Facebook monetize this?" These are valid concerns, and I'm sure they will be addressed in time. But right now, I'd like to start by asking perhaps the most pertinent question of all:
"Is Facebook at Work really an Enterprise Social Network?"
At first blush this may seem like a ridiculous question - you may think "Well, Facebook is a social network, and if it's aimed at business then of course it's an Enterprise Social Network." But this is a superficial and possibly inaccurate analysis. I think we can safely say that Facebook@Work is definitively a business social network, but whether or not it's an enterprise social network is a different question altogether.
To explore this in more detail, let's talk about what constitutes an "enterprise", and what qualifies as enterprise software. You could argue that any business is an "enterprise" by some definitions, but in the technology industry, "enterprise" has more specific connotations. In the tech industry's vernacular, "enterprise" generally refers to companies that are large or complex enough to place very specific demands on their software systems, and "enterprise software" is software which is designed and built specifically to serve the needs of those firms.
Enterprise software typically has to meet specific requirements in terms of reliability, interoperability, extensibility, and the other "ilities" as people often call them. When a complex firm embeds business logic, which constitutes some part of their competitive advantage, into a software system, the system has to be tailored to the exact specifications of that customer, or it provides no real advantage at all. Likewise, software that does not meet minimum thresholds for uptime, response time, ability to integrate with other enterprise systems, etc., is often not suited for enterprise use. Examples of required integration points may include Customer Relationship Management (CRM) systems, Sales Force Automation (SFA) systems, Enterprise Resource Planning (ERP) platforms, Business Process Management (BPM) products, etc.
Given that, what can we say about Facebook@Work? Well, details are somewhat lacking at the moment, but what has been reported to date suggests that Facebook@Work lacks any integration capabilities, including even the standard Facebook developer API. And there is certainly nothing to suggest any purpose-built integration points for connecting to CRM, SFA, ERP, or BPM systems. It also appears unlikely that Facebook@Work will support any kind of customization to speak of. It seems that this will be strictly a hosted offering, run by Facebook themselves, and not a product where customers have access to the source code and the ability to run a modified version. In terms of reliability, however, Facebook@Work should do well, assuming it inherits the same support staff and procedures that back the consumer Facebook - which is, in general, remarkably reliable given the massive amount of traffic the site serves.
Based on what we know so far, I'd hesitate to call Facebook@Work an "enterprise" product. I think it will serve well as a replacement for Hipchat, Slack, Yammer and similar tools in the SMB space. Companies of up to 250 employees or thereabouts, with limited needs to customize or integrate their software, may well find Facebook@Work very useful.
On the other hand, we believe that firms much larger than about 250 employees - and certainly those with more than 500 employees - will have needs that will not be served by Facebook@Work. This is, of course, based only on the information available today.
For firms that need a social platform which was purpose-built for enterprise scenarios, which features API support for ActivityStrea.ms, FOAF, BPM integration, and business events (SOA/ESB integration), and which is completely customizable, we suggest taking a look at our Open Source Enterprise Social Network offering, Quoddy. In addition to strong API support and a business friendly Apache License, Quoddy can serve as a cornerstone of an Enterprise Knowledge Network, with support for Semantic Web technologies and pre-built integration with Apache Stanbol for semantic concept extraction and content enhancement.
At the end of the day, Facebook@Work is an exciting development, and we believe it will serve the needs of many - but not all - business customers. Luckily there are a large number of choices in the enterprise software space, with solutions available to fit all types of firms. Whether a firm adopts Facebook@Work, Yammer, Quoddy, Slack, or "other", we firmly believe that Enterprise Social Software is going to serve as an important channel for business collaboration and knowledge transfer for the foreseeable future.
Come Meet Fogbeam Labs!
by Phillip Rhodes
Posted on Saturday September 13, 2014 at 05:19PM in Technology
So, you've been following our blog, reading our tweets, friend'ing us on Facebook, reading our posts on Hacker News, and you've circled us on Google+, LinkedIn with us, and probably even driven by our homes a few times. Wouldn't you like to stop stalking from afar and actually, you know, come meet us??
Guess what? Now's your chance! We will be demo'ing in the "demo room" at this year's CED Tech Venture Conference in Raleigh, North Carolina, on Tuesday and Wednesday of next week (that's Sept 16 and 17, 2014). And even better, you'll have not one, but two great chances to meet us in October! Phil will be presenting at the 2014 All Things Open conference in Raleigh, which happens October 22-23, 2014. Phil will also be speaking at the Triangle Java User's Group meeting on the evening of October 20th.
Of course all three of these events would be amazing even if we weren't there, and we encourage you to attend all three, even if you have never heard of Fogbeam Labs. But if you do visit, be sure to find us and say hi!
Starting Points For Learning About Open Source
by Phillip Rhodes
Posted on Wednesday August 27, 2014 at 02:44PM in Technology
I (Phil) have recently been asked to speak on a panel discussing Open Source Software and issues regarding intellectual property, OSS licensing, patents, and how recent changes have affected the Open Source world. This makes sense, given that everything we do at Fogbeam Labs is Open Source, and we make participating in the OSS community part of our mission and core values. But I'm no legal expert; there's plenty I don't know about the legal issues in this sphere, and there are licenses I know little about (especially the lesser used ones). So I decided to do some "boning up" on the topic in advance, and remembered that there are quite a few resources dedicated to this topic which are themselves "open source" (or at least freely available).
So, I thought I'd quickly throw together a list that may be useful to anyone who wants to get an overview of what this "Open Source" thing is all about, or who wants to deepen their understanding of OSS licenses and related topics.
First, we have the absolutely classic The Cathedral and the Bazaar by Eric S. Raymond. This book deals with the fundamental dichotomy between how software is produced in the decentralized, distributed "Open Source" model, and how it is produced in a rigid, top-down, bureaucratic organization (like most software companies). Note that the linked page includes the text of the book (including foreign language translations), comments by the author, and links to other discussions and comments by other observers.
Fundamentally, if you want to understand the Open Source world and the mindset of the people who populate it, this is required reading. No, not everybody agrees with everything esr has to say, and yes, this book is somewhat dated now. But it has been so amazingly influential that it's become part of the very fabric of this movement.
Next up we have Understanding Open Source and Free Software Licensing by Andrew M. St. Laurent. This book focuses specifically on OSS and Free Software licenses, and includes a comprehensive analysis / explanation of all of the important and widely used licenses that you will encounter. If you have ever wondered "what do they mean when they say that the GPL is 'viral'" or "what's the problem with mixing code that's released under different licenses" or something similar, this is your book. It's not a law textbook, but it covers the legalities and legal implications of OSS licensing for laymen quite well.
Another excellent title covering the legal nuts and bolts of Open Source licensing is Open Source Licensing: Software Freedom and Intellectual Property Law by Lawrence Rosen. Rosen has been a high profile participant in legal aspects of Open Source for years, and has written a great book to help people understand the interaction of law and software. This book and the aforementioned Understanding Open Source and Free Software Licensing collectively cover pretty much everything you could want to know about licensing and legal issues (to the extent that such a thing is possible; there is still a lack of case law and legal clarity in certain areas).
Another excellent book, especially for those leading - or who would lead - Open Source projects, is Karl Fogel's Producing Open Source Software: How to Run a Successful Free Software Project, or "Producing OSS" as it's known. "Producing OSS" covers the nuts and bolts of running an Open Source project and actually shipping software. Surprisingly (or perhaps not so surprisingly) there is a lot more to running a successful project than dealing with code and tech issues. Karl's book deals with the various "soft" issues that projects face - dealing with volunteers, creating a meritocracy, understanding how money affects the project, etc. I highly recommend this book to anyone who is, or wants to be, an active participant in any Open Source community.
And last, but certainly not least, we have the Architecture of Open Source Applications series. In these two books, the creators of dozens of popular Open Source projects explain the inner workings of their projects, and reveal the architectural details that made them successful. If you value learning via emulation, this is an amazing series of case studies to learn from.
And there you have it, folks - a virtual cornucopia of Open Source wisdom collected over the years. If you have ever wanted to develop a solid understanding of how Open Source works and what it's all about, this is a great place to kick off your journey. And, of course, feel free to post any questions or comments here.
Why We Don't Want To Be "The Next Red Hat"
by Phillip Rhodes
Posted on Thursday February 13, 2014 at 10:54PM in Technology
Earlier today I read an interesting article at TechCrunch by Peter Levine, in which he asserts that "there will never be another Red Hat" and more or less lambasts the notion of a company based on Open Source.
We are a company based on Open Source.
So, I guess my first thought should have been "Oh, shit. We're doing this all wrong. Let's yank all of our repositories off of GitHub and close everything immediately."
Yeah.... no.
The truth is, Peter makes an interesting point or two in his article, and some of what he says at the end is moderately insightful. In fact, it reflects some decisions we made a few months ago about how we're going to position some new product offerings in 2014. But nothing in his article really provides any support for the idea that there is one, and only one, successful "Open Source Company".
OK, to be fair, I'll take his word that Red Hat is the only public company whose primary foundation is Open Source. But I'll counter that by pointing out that "going public" is not the sole measure of success for a firm. I'll also grant you that even Red Hat, seemingly the most successful "Open Source company" to date, is much smaller than Microsoft, Oracle, and Amazon.com.
Guess what? Almost every company is much smaller than Microsoft, Oracle and Amazon.com. Comparing a company to those outliers is hardly damning them. Truth is, RH is an $11 billion company - nothing to sneeze at. And yes, we have been known, on occasion, to use the phrase "the next Red Hat" when trying to describe to people what we're out to do here at Fogbeam.
Let's look at something else while we're at it... Red Hat are hardly the only successful Open Source company in the world anyway. They are probably the biggest and the most well known, but stop and consider a few other names you may have heard: Alfresco, Jaspersoft, Bonitasoft, SugarCRM, Cloudera, Hortonworks, Pivotal, Pentaho... Yeah, you get the drift.
And then there's this jewel of a quote from the article: If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.”
Unless I'm misinterpreting Peter here, he seems to be suggesting that companies do not want to, or are not willing to, pay for support for the Open Source solutions they use. All I can say is that this does not match my experience at all. Oh, don't get me wrong... there will always be some percentage of "freeloaders" who use the OSS code and never buy a support subscription. Red Hat know that, and we know that. But what we also know is that most businesses that are using a product for a mission critical purpose want a vendor behind the product, and they are willing to pay for that (as long as the value is there). The fact is, companies want to know that if a system breaks, there is somebody to call who will provide support with an SLA. They want to know that if they need training, there is somebody to call to provide that training. They want to know that if professional services are needed for integrations or customizations, there is somebody they can call who knows their shit. And, more prosaically, they want to know that there is a company there to sue if the shit really hits the fan.
So when I read Peter's article, I really don't hear a strong argument that there can't be other successful Open Source companies. In fact, I can't help but think that all he's really saying is "It's hard to build an Open Source company that will generate returns at a scale, and in a timeframe, that's compatible with the goals of Andreessen Horowitz." And that's a perfectly fine thing to say. Maybe an Open Source company would be a bad investment for A16Z. But that isn't even close to the same thing as suggesting that you can't be successful using Open Source - if your goals and success criteria are different.
Anyway, as far as the whole "next Red Hat" thing goes - the thing is, we don't actually aspire to be "the next Red Hat". We've just used that term because it's a simplification and it's illustrative. But as far as aspirations for where we are going? Nah... In fact, here's the thing: we aren't out to be "the next Microsoft" either. Or "the next IBM". Or "the next Oracle", or "the next Amazon.com", and so on and so on, ad infinitum.
No, fuck all that. Our aspirations are far bigger than that. Wait, did I say "bigger"? Maybe I really just meant "different". Bigger isn't always better, and there are other ways to distinguish yourself besides size. Will we be an $11 billion company one day? I don't know. Maybe we'll actually be a $221 billion company. Maybe we'll be a $2 million company. Maybe we'll never make a dime at all.
What I do know is that our plan is this: We are working to build a company that is so fucking awesome that in a few years, people doing startups will go to people and say "We plan to be the next Fogbeam Labs"...
On Solving The Social Aspect Of BPM
by Phillip Rhodes
Posted on Thursday February 06, 2014 at 01:51PM in Technology
Over at the BPM.com forums, Peter Schooff has posed a very interesting question: "What Is the Key to Solving the Social Aspect of BPM?" This is a topic we've thought a lot about, and "social BPM" is very core to us here at Fogbeam Labs, so I wanted to take a moment and share some thoughts on this very important topic.
The discussion is focused on this factoid from a recent Aberdeen survey:
Thirty-four percent (34%) of respondents in Aberdeen’s Solving Collaboration Challenges with Social ERP indicated that they have difficulty converting collaborative data into business execution. This is unnerving because, for many processes, the ability for people working together collaboratively is essential for process effectiveness.
To really understand this, you have to consider what exactly the collaborative aspects of a BPM process are. And, in truth, many processes (perhaps most) are inherently collaborative, even if the collaborative aspect is not explicitly encoded into a BPMN2 diagram. Think of any time you've been involved in a process of some sort (whether BPM software or workflow engines were involved or not) and you had to make a decision or take some action... and you needed information or input from someone else first. If you picked up a telephone and made a call, or sent an email or an IM, then you were doing "social BPM", whether you use the term or not.
The first factors, then, in really taking advantage of collaboration in BPM are the exact same things involved in fostering collaboration in any fashion. It's not really a technology issue; it's an issue of culture, organization design, and incentives. Do people in your organization fundamentally trust each other? Is information shared widely, or hoarded? Does the DNA of your firm encourage intra-firm competition between staff members, or widespread collaboration which puts the good of the firm first? Sadly, in too many firms the culture is simply not collaborative, and nothing you do in terms of BPM process design, or deployment of "enterprise social software" or BPM technology, is going to fix your broken culture.
Next, we have to look at these questions: Does your firm actually empower individual employees to make decisions and use their judgment? Can an employee deviate from the process? No? Well what if the process is broken? Can your staff "route around" badly designed process steps, involve other people as necessary, inject new information, reroute tasks and otherwise take initiative? If the answers to most or all of these questions are "no", then you aren't going to have collaborative processes. If your organization is a rigid, top-down hierarchy that embraces a strict "command and control" philosophy, you're never going to get optimal effect from encouraging people to collaborate on BPM processes - or anything else.
It's only once you have the cultural and structural issues taken care of that technology even comes into play. Can some BPM software do more than others to encourage and facilitate social collaboration? Absolutely. That's why we are developing our Social BPM offering with specific capabilities that help cultivate knowledge sharing and collaboration. Using semantic web technology to tie context to tasks and content (where "context" includes things like "Bob in France is the expert on this topic and here's his contact info"), and exploiting "weak ties" and Social Network Analysis to provide suggested sources for consultation, are crucial technical capabilities for making BPM more "social". Additionally, if you have the cultural and structural alignment in place to really foster collaboration and knowledge sharing, then enterprise social software is an amazingly powerful tool for cultivating knowledge transfer, fostering engagement, and driving alignment throughout your organization.
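For the technically curious, here's a rough sketch of what "tying context to tasks" can look like at the RDF level, using Apache Jena. The vocabulary here (the ex: namespace, concernsTopic, suggestedExpert) is invented purely for illustration - it is not a published ontology or our product's actual schema - though FOAF is a real vocabulary:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class TaskContext {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        String ex = "http://example.com/socialbpm#";  // invented vocabulary
        String foaf = "http://xmlns.com/foaf/0.1/";   // FOAF is real; its use here is illustrative

        // "Bob in France is the expert on this topic" as queryable triples:
        Resource bob = m.createResource("http://example.com/people/bob")
                .addProperty(m.createProperty(foaf, "name"), "Bob")
                .addProperty(m.createProperty(foaf, "mbox"),
                        m.createResource("mailto:bob@example.com"));

        // Attach the topic and the suggested expert directly to the BPM task.
        m.createResource("http://example.com/tasks/approve-quote-123")
                .addProperty(RDF.type, m.createResource(ex + "Task"))
                .addProperty(m.createProperty(ex, "concernsTopic"),
                        m.createResource("http://dbpedia.org/resource/Export_control"))
                .addProperty(m.createProperty(ex, "suggestedExpert"), bob);

        m.write(System.out, "TURTLE");  // dump the task-context graph as Turtle
    }
}
```

Once the task, its topic, and the expert live in the same graph, "who should I ask about this task?" becomes an ordinary query rather than a phone tree.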
Done well, combining social software and BPM can provide tremendous benefits. But no technology is going to help if your culture is wrong. If you're having trouble with collaboration, I strongly encourage you to examine the "soft" issues before you spend a dime on additional technological tooling.
Dominiek ter Heide is Dead Wrong. The Semantic Web Has Not "Failed"
by Phillip Rhodes
Posted on Wednesday November 13, 2013 at 05:50PM in Technology
There is an interesting article at Gigaom right now by Dominiek ter Heide of Bottlenose, in which the author asserts that the Semantic Web has failed, and purports to give the three reasons why it has failed.
This is, of course, utter bollocks. I want to take this opportunity to explain why, and to provide the counterpoint to Dominiek's piece.
For starters, there is simply no legitimate basis for saying that "the Semantic Web has failed" to begin with. Given that his initial assertion is flat out wrong, there's almost no reason for a point-by-point rebuttal to the rest of his piece, but we'll work our way through it anyway, as the process may be educational.
So, if I'm going to say that the Semantic Web has not failed, then how might I substantiate or justify that claim? OK, easy enough... you probably use the Semantic Web every. single. day. And so do most of your friends. You just don't know it. And that is kind of the point. The Semantic Web isn't something that's really meant for end users to interact with directly. The essence of the Semantic Web is to enable machine readable data with explicitly defined semantics. Doing that allows the machines to do a better job of helping the humans do whatever it is they are trying to do. A typical user could easily use an application backed by the Semantic Web without ever knowing about it.
And here's the thing - they do. I said before that you probably use the Semantic Web every day. You might have thought "Yeah, right Phil, no way do I use anything like that". Well, if you use Google [1][3], Yahoo[2][3], or Bing[3], then guess what - you're using the Semantic Web. Have you seen those Google Rich Snippets around things like results for restaurants, etc.? That is powered by the Semantic Web. Aside: For the sake of this article, I treat RDFa, Microdata, Microformats, RDF/XML, JSON-LD, etc., as being functionally equivalent, as the distinction is not relevant to the overall point I'm making.
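For example, the markup behind a restaurant rich snippet can be as simple as a bit of JSON-LD embedded in the page. The restaurant here is hypothetical, but the schema.org vocabulary is the real thing the search engines consume:

```json
{
  "@context": "http://schema.org",
  "@type": "Restaurant",
  "name": "The Example Grill",
  "address": { "@type": "PostalAddress", "addressLocality": "Raleigh", "addressRegion": "NC" },
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.5", "reviewCount": "127" }
}
```

Machine readable data, explicitly defined semantics, embedded in an ordinary web page - that is the Semantic Web, quietly doing its job.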
I could stop here and say that we've already proven that Dominiek ter Heide is wrong, but let's dig a little deeper.
The first reason that Dominiek gives reduces to an argument that everything on the Semantic Web is "obsolete knowledge" or Obsoledge.
This has the effect of making the shelf-life of knowledge shorter and shorter. Alvin Toffler has – in his seminal book Revolutionary Wealth – coined the term Obsoledge to refer to this increase of obsolete knowledge. If we want to create a web of data we need to expand our definition of knowledge to go beyond obsolete knowledge and geeky factoids. I really don’t care what Leonardo DaVinci’s height was or which Nobel prize winners were born before 1945. I care about how other people feel about last night’s Breaking Bad series finale.
This is simply a factually incorrect view of the Semantic Web. Again, the goal of the Semantic Web is to provide machine readable, defined semantics along with data on the web. It does not matter one bit whether that data is as old as a reference to Leonardo Da Vinci or as recent as a reference to last night's episode of Grimm. The Semantic Web is just as relevant to the kind of up-to-date, trending data that Dominiek seems so obsessed with as it is to "historical" data. And let me also point out that history remains amazingly important - as the old saw goes, "Those who fail to learn from the past are doomed to repeat it". To suggest that knowledge loses all value simply because it is old is absurd.
His second argument simply states that "Documents are dead". I could just point out that this blog post, which you are currently reading, Faithful Reader, and his own article at Gigaom are both "documents". You do the math.
It goes deeper than that, however. His argument, again, fails for obvious reasons which betray a total misunderstanding of the Semantic Web and the state of the Web in general. His argument is that "now" data is encapsulated in tweets and other streaming, social-media, real-time data sources. While it is a fair point that more and more data is being passed around in tweets and their ilk, the factually incorrect part is the claim that those sources are not valid components of the Semantic Web, just like everything else on the web. Case in point: one of our products here at Fogbeam Labs, Neddick, consumes data from RSS feeds, IMAP email accounts, AND Twitter, performs semantic concept extraction on all of those data sources (and more are coming, including G+, Facebook, LinkedIn, etc.), and can find the connections between, say, a Tweet and a related blog post! That's the power of the Semantic Web, and the point that Mr. ter Heide seems to be missing.
His final argument is that "Information should be pushed, not pulled". Again, this betrays a complete misunderstanding of the Semantic Web. The knowledge extracted from Semantic Web sources can be used in either "push" or "pull" modalities. Again, one of our products can leverage Semantic Web data to generate real-time alerts using Email, XMPP, or HTTP POST, based on identifying a relevant bit of knowledge in a piece of content - whether that piece of content is a Tweet, a real-time Business Event extracted from a SOA/ESB backbone, or a Blog post.
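To be concrete about the "push" modality (and to be clear, this is an illustration, not our product's actual code), imagine evaluating a standing SPARQL ASK query over the triples extracted from each new piece of content, and firing an HTTP POST to a webhook when it matches. The ex:mentions property and the watched entity are invented for the example:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;

public class PushAlert {
    // Evaluate a standing query over freshly extracted triples; POST on a match.
    static void alertIfRelevant(Model extracted, String webhookUrl) throws Exception {
        String ask =
            "PREFIX ex: <http://example.com/ns#> " +
            "ASK { ?content ex:mentions <http://dbpedia.org/resource/Acme_Corporation> }";

        boolean relevant;
        try (QueryExecution qe = QueryExecutionFactory.create(ask, extracted)) {
            relevant = qe.execAsk();
        }

        if (relevant) {
            // "Push" the matching content downstream as Turtle via HTTP POST.
            HttpURLConnection con = (HttpURLConnection) new URL(webhookUrl).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "text/turtle");
            try (OutputStream out = con.getOutputStream()) {
                extracted.write(out, "TURTLE");
            }
            System.out.println("webhook responded: " + con.getResponseCode());
        }
    }
}
```

The same extracted knowledge drives a pulled search result or a pushed alert; push versus pull is a delivery decision, not a limitation of the underlying semantics.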
Nearing the end of this piece, let me just say that the Semantic Web is becoming more and more important with every passing day. As tools like Apache Stanbol, which automate the process of extracting rich semantics from unstructured data, mature and become more widely available, the number of applications for explicit semantics is just going to mushroom.
To finish up, let's look at a quick example of what I'm talking about. Let's say you have deployed our Enterprise Social Network, Quoddy, and your company does something with musicians. Your Quoddy status update messages occasionally mention, say, Jon Bon Jovi, Bob Marley, Richard Marx, and Madonna. How would you do a search without SemWeb tech that says "show me all posts that mention musicians"? Not gonna happen. But by using Stanbol for semantic extraction and storing that knowledge in a triplestore, we can make that kind of query trivial.
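For instance, if the extraction step records a triple linking each post to the DBPedia entity it mentions (the ex:mentions property here is hypothetical, but dbo:MusicalArtist is a real DBPedia ontology class), the "posts that mention musicians" search reduces to a few lines of SPARQL:

```sparql
PREFIX ex:  <http://example.com/ns#>
PREFIX dbo: <http://dbpedia.org/ontology/>

# "Show me all posts that mention musicians":
SELECT ?post ?artist WHERE {
  ?post   ex:mentions ?artist .    # triple written by the extraction step
  ?artist a dbo:MusicalArtist .    # Bon Jovi, Marley, Marx and Madonna all qualify
}
```

Note that nobody ever tagged a post with "musician"; the type information comes along for free once the mention is linked to a knowledge base.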
It gets better though... Stanbol comes "out of the box" with the ability to dereference entities that are in DBPedia and other knowledge bases, which is cool enough in its own right... but you can also easily add local knowledge and your own custom enhancement engines. So now entities that are meaningful only in your local domain (part numbers, SKUs, customer numbers, employee ID numbers, whatever) can be semantically interlinked and queried as part of the overall knowledge graph.
Hell, I'd go so far as to say that Apache Stanbol (along with Apache OpenNLP and a few related projects... Apache UIMA, Apache Clerezza, and Apache Marmotta, etc.) might just be the most important open source project around right now. And nobody has heard of it. Again, the Semantic Web is largely not something that the average end user needs to know or think about. But they'll benefit from the capabilities that semantic tech brings to the table.
At the end of the day, the Semantic Web is just a step on the road to having something like the Star Trek Computer or a widely available and ubiquitous IBM Watson. Saying that the Semantic Web has failed is to ignore all of the facts and deny reality.
Fogbeam Status Update - September 2013
by Phillip Rhodes
Posted on Wednesday September 25, 2013 at 08:04PM in Technology
Dear Friends of Fogbeam:
Just to be clear, no, we are not about to be acquired by LinkedIn. But I'll come back to why I say that, in a few moments.
On to the news and important stuff. It's been a lot longer than normal since our last status update email. If you follow the writings of Paul Graham, you may recall his famous "How Not To Die" essay[1], where he talks about how startups usually succeed if they can just avoid dying long enough. In it, he makes another interesting point:
For us the main indication of impending doom is when we don't hear from you. When we haven't heard from, or about, a startup for a couple months, that's a bad sign. If we send them an email asking what's up, and they don't reply, that's a really bad sign. So far that is a 100% accurate predictor of death. Whereas if a startup regularly does new deals and releases and either sends us mail or shows up at YC events, they're probably going to live.
Given that, you might wonder if you should take it as a bad sign that we haven't emailed you in some time. As it happens, nothing could be further from the truth. While we haven't been sending a lot of emails, we have been blogging[2], tweeting[3], and sharing content on Facebook and Google+. But far more important than all of that: we've been heads down, grinding away, working on moving things forward.
As a result of that hard work, we were recently able to proudly announce three new project releases[4], including our first ever "simultaneous release" of three components of the Fogcutter project. We also launched our brand new website at http://www.fogbeam.com at the same time. We now consider our Enterprise Social Network, Quoddy, and our Information Discovery Platform, Neddick, to be in Limited Availability status. This means we have two products available for sale, with the caveat that we are only looking to sell to customers who fit certain criteria, and who will engage with us in a "co-creation" scenario as we move towards a "GA" release.
We have also been hard at work on market research: we have chosen a "beach-head market" to pursue, and have identified approximately 160 companies in North Carolina that we will be attempting to gain access to, in hopes of landing those first few alpha customers. Also on the sales and marketing front, we are starting to see results from our content marketing strategy and are receiving inbound leads via email and Twitter.
Things have not been all "sunshine and roses" since last time, however. Sadly, one member of our founding team, Robert Fischer, chose to step down due to issues in his personal life. We won't get into details out of respect for his privacy, but he had external situations that were imposing a great deal of stress on him and left him feeling that he was not able to contribute at the level he would want. We certainly will (and do) miss Robert, but we continue to soldier on, despite this setback.
On the other hand, we are fortunate to be able to announce a new member of our team, Eric Stone. While not a "replacement" for Robert per se, Eric brings our team back to three, and adds another wicked smart member who is going to be a tremendous asset for us. Eric received his Computer Science degree from UNC Chapel Hill, and is currently pursuing graduate studies in Statistics & Operations Research, also at UNC-CH. Eric interned with us this summer, and did such a bang-up job that we asked him to stay on as a permanent member of the team.
The other adversity we had to fight in 2012 was a serious health issue that I (Phil) encountered, when I was initially diagnosed as diabetic. Prior to being diagnosed, my blood sugar reached a level that caused a potentially fatal condition known as DKA, and left me in the hospital for three days, almost exactly one year ago. Thankfully the condition is very survivable with modern medical technology, and I'm still here and kicking. My diabetes is now well controlled and life is back to normal (or what passes for normal for a startup founder).
All of that said, let's get back to why we mentioned LinkedIn earlier on. This is a reference to a recent article[5] that appeared in the San Jose Business Journal, titled The Companies LinkedIn Should Buy With Its $1B Cash Infusion. In this piece, SJBJ listed Fogbeam Labs as one of their suggested purchases for LI. Now, as we said, we don't actually expect LinkedIn to come calling wanting to acquire us anytime soon. And, truth be told, we probably don't *want* to be acquired this early, as the valuation we would receive right now would not come close to meeting our expectations and goals (just to be clear, we plan on building a company here that can go public with a multi-billion-dollar valuation). This mention is notable, however, as it demonstrates that people as far away as Silicon Valley are aware of what we're doing, and are paying some attention to us. And this despite the fact that we really haven't done any publicity or PR work targeted specifically at the West Coast.
So, to wrap this up: We are making great progress on the product front, we are receiving some recognition from media as far away as Silicon Valley, we have overcome some serious adversity, and we refuse to die - in more ways than one! As 2013 draws to a close, our focus starts to shift to engaging with our chosen "beach-head market" and trying to generate some initial revenue and clarify our short-term product roadmap.
- [1]: http://www.paulgraham.com/die.html
- [2]: http://fogbeam.blogspot.com/
- [3]: https://twitter.com/FogbeamLabs
- [4]: http://fogbeam.com/news.html
- [5]: http://www.bizjournals.com/sanjose/news/2013/09/04/companies-linkedin-should-buy.html?page=1
Thanks for listening, and please feel free to ping us with any questions or comments.
Phil, Sarah and Eric
Fogbeam Labs
Social, Events, BPM... oh my! But what about Knowledge and Context?
by Phillip Rhodes
Posted on Tuesday May 28, 2013 at 03:55PM in Technology
There is a very good article at ZDNet which speaks to the importance of the "trinity" of event driven architectures, social software, and BPM. And while the basic point is sound (all of those technologies certainly are more valuable when integrated and used together), the article leaves out an important element: Knowledge.
Integrating Social software with BPM and an Event based architecture is, of course, part of what we are giving you the power to do with Quoddy, our open source Enterprise Social Network product. But we believe you need to go beyond providing a social front-end for subscribing to, sharing, discussing and acting on business events and tasks: you also need to provide the context and knowledge - both within and outside the walls of the firm - that support decision making. And that's what we are developing with Quoddy and the rest of our Fogcutter Suite of products. All of the pieces aren't quite finished yet, but we are evolving a system which will allow you to subscribe to, for example, business events from your ESB/SOA infrastructure, render relevant events into your event stream, and then find the users, documents, applications, databases and other knowledge sources - within your firm or out on the 'net - which are relevant to learning about and acting on that event.
We posit that it is this combination of events, tasks, users, and knowledge / context which will fully unleash the vision of the Digital Nervous System. When all of the people in your organization have fingertip access to the events occurring within your organization - in real time, or near real time - and convenient access to the related contextual knowledge surrounding those events, then you have the foundation for serious enterprise agility and responsiveness.
To this end, we are working on new features across our product line which allow semantic concept extraction and automatic linking and referencing of entities with defined semantics within your enterprise content, and which then support semantic queries against, and reasoning and inference over, that knowledge. Follow this blog, or our Twitter feed, for all the latest news and announcements as we continue down this amazingly exciting path. We can't quite give you the Star Trek Computer yet, but with Semantic Web tech applied in the enterprise, and combined with BPM, Business Events and Social Software, we will be giving you the most powerful tools yet for managing knowledge and information within your enterprise.
For more information on how you can begin to integrate Social, Events, BPM and the Semantic Web in your organization, contact us today.
Why The "Star Trek Computer" will be Open Source and Released Under Apache License v2
by Phillip Rhodes
Posted on Wednesday May 22, 2013 at 02:03PM in Technology
If you remember the television series Star Trek: The Next Generation, then you know exactly what someone means when they use the expression “the Star Trek Computer”. On TNG, “the computer” had abilities which were so far ahead of real-world tech of the time, that it took on an almost mythological status. And even to this day, people reference “The Star Trek Computer” as a sort of short-hand for the goal of advances in computing technology. We are mesmerized by the idea of a computer which can communicate with us in natural, spoken language, answering questions, locating data and calculating probabilities in a conversational manner, and - seemingly - with access to all of the data in the known Universe.
And while we still don’t have a complete “Star Trek Computer” to date, there is no question that amazing progress is being made. The performance of IBM’s Watson supercomputer on the game show Jeopardy is one of the most astonishing of the recent demonstrations of how far computing has come.
So given that, what can we say about the eventual development of something we can call “The Star Trek Computer”? Right now, we can say at least two things: it will be Open Source, and it will be licensed under the Apache Software License v2. There’s a good chance it will also be a project hosted by the Apache Software Foundation.
This might seem like a surprising declaration to some, but if you’ve been watching what’s going on around the ASF the past couple of years, it actually makes a lot of sense. A number of projects related to advanced computing technologies, of the sort which would be needed to build a proper “Star Trek Computer”, have migrated to, or launched within, the Apache Incubator, or are long-standing ASF projects. We’re talking about projects which develop Semantic Web technologies, Big Data / cluster computing, Natural Language Processing, and Information Retrieval. All of these represent elements which would go into a computing system like the Star Trek one, and work in this area has been slowly coalescing around the Apache Software Foundation for some time now.
Apache Jena, for example, is foundational technology for the “Semantic Web”, which creates a massively interlinked, “database of databases” world of Linked Data. When we talk about how the Star Trek computer had “access to all the data in the known Universe”, what we really mean is that it had access to something like the Semantic Web and the Linked Data cloud. Jena provides a programmatic environment for RDF, RDFS, OWL and SPARQL, and includes a rule-based inference engine. Jena moved into the Apache Incubator on 2010-11-23, and graduated as a TLP on 2012-04-18. Since then, the Jena team have continued to push out new releases and advance the state of Jena on a continual basis.
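As a tiny taste of what Jena gives you, here's a sketch (against the current org.apache.jena API; older Jena versions used different package names) that asserts one fact plus a one-line class hierarchy, then lets the built-in RDFS reasoner derive a new fact. The example namespace is, of course, invented:

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class JenaTaste {
    public static void main(String[] args) {
        String ns = "http://example.com/ns#";  // invented namespace
        Model base = ModelFactory.createDefaultModel();

        // Assert: Musician is a subclass of Person, and bob is a Musician.
        Resource person = base.createResource(ns + "Person");
        Resource musician = base.createResource(ns + "Musician")
                                .addProperty(RDFS.subClassOf, person);
        Resource bob = base.createResource(ns + "bob")
                           .addProperty(RDF.type, musician);

        // Wrap the raw triples with the built-in RDFS reasoner...
        InfModel inf = ModelFactory.createRDFSModel(base);

        // ...and a fact we never asserted is now derivable: bob is a Person.
        System.out.println(inf.contains(bob, RDF.type, person));  // prints: true
    }
}
```

Trivial here, but the same machinery scales up to inference over large, interlinked knowledge bases.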
Another Apache project, OpenNLP, could provide the essential “bridge” that allows the computer to understand questions, commands and requests which are phrased in normal English (or some other human language). In addition to supporting the natural language interface with the system, OpenNLP is a powerful library for extracting meaning (semantics) from unstructured data - specifically, textual data in an unstructured (or semi-structured) format. Examples of unstructured data would be this blog post, an article in the New York Times, or a Wikipedia article. OpenNLP, combined with Jena and other technologies, allows “the computer” to “read” the Web, extracting meaningful data and saving valid assertions for later use. OpenNLP entered the Apache Incubator on 2010-11-23 and graduated as a Top Level Project on 2011-02-15.
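Here's a minimal sketch of OpenNLP finding person names in raw text, assuming you've downloaded the pre-trained en-token.bin and en-ner-person.bin models that the project distributes:

```java
import java.io.FileInputStream;
import java.util.Arrays;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;
import opennlp.tools.util.Span;

public class FindPeople {
    public static void main(String[] args) throws Exception {
        // Pre-trained models, downloaded separately from the OpenNLP site.
        TokenizerME tokenizer = new TokenizerME(
                new TokenizerModel(new FileInputStream("en-token.bin")));
        NameFinderME people = new NameFinderME(
                new TokenNameFinderModel(new FileInputStream("en-ner-person.bin")));

        String[] tokens = tokenizer.tokenize(
                "Leonardo da Vinci painted the Mona Lisa in Florence.");

        for (Span span : people.find(tokens)) {
            // Each Span marks the token range recognized as a person's name.
            System.out.println("person: " + String.join(" ",
                    Arrays.copyOfRange(tokens, span.getStart(), span.getEnd())));
        }
    }
}
```

That recognized name is exactly the kind of raw material a semantic pipeline then links to an entity in a knowledge base.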
Apache Stanbol is another new'ish project within the ASF, which describes itself as “a set of reusable components for semantic content management.” Specifically, Stanbol provides components to support reasoning, content enhancement, knowledge models and persistence for semantic knowledge found in “content”. You can pipe a piece of text (this blog post, for example) through Stanbol and have it extract Named Entities, create links to dbPedia, and otherwise attach semantic meaning to “non semantic” content. To accomplish this, Stanbol builds on top of other projects, including OpenNLP and Jena. Stanbol joined the Apache Incubator on 2010-11-15 and graduated as a TLP on 2012-09-19.
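Because Stanbol exposes its enhancer over REST, "piping text through it" really is about this simple. A sketch, assuming a default local Stanbol launcher on port 8080 with its standard enhancer chain (an assumption about your setup, not a universal constant):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class EnhanceText {
    public static void main(String[] args) throws Exception {
        // POST plain text to the local Stanbol enhancer endpoint.
        HttpURLConnection con = (HttpURLConnection)
                new URL("http://localhost:8080/enhancer").openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "text/plain");
        con.setRequestProperty("Accept", "text/turtle");  // ask for the enhancements as RDF

        try (OutputStream out = con.getOutputStream()) {
            out.write("Bob Marley was born in Jamaica."
                    .getBytes(StandardCharsets.UTF_8));
        }

        // The response is RDF describing extracted entities (e.g. a link to
        // dbpedia.org/resource/Bob_Marley), ready to load straight into Jena.
        try (Scanner s = new Scanner(con.getInputStream(), "UTF-8")) {
            while (s.hasNextLine()) System.out.println(s.nextLine());
        }
    }
}
```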
If we stopped here, we could already support the claim that the ASF is a key hub for development of the kinds of technologies which will be needed to construct the “Star Trek Computer”, but there’s no need to stop. It gets better...
Apache UIMA is similar to Stanbol in some regards, as it represents a framework for building applications which can extract semantic meaning from unstructured data. Part of what makes UIMA of special note, however, is that the technology was originally a donation from IBM to the ASF, and also that UIMA was actually a part of the Jeopardy-winning Watson supercomputer[1]. So if you were wondering: yes, Open Source code is advanced enough to constitute one portion of the most powerful demonstration to date of the potential of a Star Trek Computer.
Lucene is probably the most well known and widely deployed Open Source information retrieval library in the world, and for good reason. Lucene is lightweight, powerful, and performant, and makes it fairly straightforward to index massive quantities of textual data, and search across that data. Apache Solr layers on top of Lucene to provide a more complete “search engine” application. Together, Lucene/Solr constitute a very powerful suite of tools for doing information retrieval.
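To give a flavor of how little code "index it, then search it" takes, here's a minimal in-memory Lucene example. The API shown is the modern one; details shift a bit between major Lucene versions:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class TinySearch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();  // throwaway in-memory index
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index one document with a single analyzed, stored text field.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("body",
                    "The Star Trek computer answered questions in natural language.",
                    Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search it with a phrase query.
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        Query query = new QueryParser("body", analyzer).parse("\"natural language\"");
        for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
            System.out.println(searcher.doc(hit.doc).get("body"));
        }
    }
}
```

The same few calls, pointed at an on-disk Directory and fed millions of documents, are the heart of many production search systems.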
Mahout is a Machine Learning library which builds on top of Apache Hadoop to enable massively scalable machine learning. Mahout includes pre-built implementations of many important machine learning algorithms, and is particularly notable for its capabilities for processing textual data and performing clustering and classification operations. Mahout-provided algorithms will probably be part of an overall processing pipeline, along with UIMA, Stanbol, and OpenNLP, which supports giving “the computer” the ability to “read” large amounts of text data and extract meaning from it.
And while we won’t try to list every ASF project here, which could be a component of such a system, we would be remiss if we failed to mention, at least briefly, a number of other projects which relate to this overall theme of information retrieval, text analysis, semantic web, etc. In terms of “Big Data” or “cluster computing” technology, you have to look at the Hadoop, Mesos and S4 projects. Other Semantic Web related projects at the ASF include Clerezza and Marmotta. And from a search, indexing and information retrieval perspective, one must consider Nutch, ManifoldCF and Droids.
As you can see, the Apache Software Foundation is home to a tremendous amount of activity which is creating the technology which will eventually be required to make a true “Star Trek Computer”. For this reason, we posit that when we finally have a “Star Trek Computer” it will be Open Source and ALv2 licensed. And there’s a good chance it will find a home at the ASF, along with these other amazing projects.
Of course, you don't necessarily need a full-fledged "Star Trek Computer" to derive value from these technologies. You can begin utilizing Semantic Web tech, Natural Language Processing, scalable machine learning, and other advanced computing techniques to derive business value today. For more information on how you can build advanced technological capabilities to support strategic business initiatives, contact us at Fogbeam Labs today. And for all the latest updates from Fogbeam Labs, follow us on Twitter.
Essential Reading for IT Leaders: Part Two
by Phillip Rhodes
Posted on Thursday May 16, 2013 at 01:19PM in Technology
Following up on part one in our series on essential reading for IT leaders (that is, CIOs, CTOs, IT Directors, etc.), today we offer you 10 more titles to complement your existing technical chops. The theme of this group of titles is largely the same as before: IT leaders need to move beyond being strictly technologists, and develop a much deeper, more intuitive, and more strategic understanding of the business. And while other people are asking Do you see CIOs becoming extinct?, we argue that a CIO or CTO or IT Director who understands technology and strategy, and who can "bridge the gap" between the world of "the business" and the technology world, will always be an incredibly valuable asset and will have a role in any modern organization.
Here then, are 10 more titles to add to your reading list:
- The Future of Competition: Co-Creating Unique Value With Customers by C. K. Prahalad and Venkat Ramaswamy - If anyone can challenge Michael Porter's status as "the King of modern business strategy" then it's probably C.K. Prahalad. Prahalad is one of the greatest business thinkers to have ever lived, and this book was a masterpiece by a master. In The Future of Competition the authors develop the idea of co-creation which is a model of interaction with customers in which the customer is cast as an equal partner, rather than merely a passive recipient. The key idea is that maximum value is created by a mutual development process in which the firm and the customer work together to create personalized, unique solutions. In a world where even the most technologically sophisticated products are at risk of being commoditized, co-creation stands out as a way to step around that risk, deliver maximum value, and maintain deep customer engagement.
- The Innovator's Dilemma: The Revolutionary Book That Will Change the Way You Do Business by Clayton M. Christensen - A landmark book, The Innovator's Dilemma explores the reasons why firms innovate, but fail to gain a return on those innovations. The key idea here is that fear of cannibalizing an existing, legacy business prevents firms from investing in, and promoting, innovative products, even while their competitors are moving past them. Understanding the ideas here is essential for executive leadership in any company which depends on continuous innovation for growth.
- The Innovator's Solution: Creating and Sustaining Successful Growth by Clayton M. Christensen and Michael E. Raynor - The followup to The Innovator's Dilemma, this title explores how firms can become producers of disruptive innovations, and adopt strategies to achieve disruptive growth through successful innovation.
- Outside Innovation: How Your Customers Will Co-Design Your Company's Future by Patricia Seybold - Outside Innovation is an excellent and in-depth analysis of how to work together with customers to develop innovative offerings. The ideas presented in this book overlap to some extent with those of The Future of Competition, and the two titles complement each other well. Seybold's book is more "hands on", so to speak, with plenty of real world examples of what she calls "outside innovation", whereas the work by Prahalad and Ramaswamy is a little more academic.
- Crossing the Chasm: Marketing and Selling Disruptive Products to Mainstream Customers by Geoffrey Moore - This may be the most famous marketing book ever, at least in high-tech circles, and rightly so. Moore illustrates how the Technology Adoption Lifecycle is not a continuous curve, as it had generally been presented in the past. His analysis of the gap or "chasm" between segments of the curve represents an insight that changed high-tech marketing forever. If your firm has to deal with the challenge of introducing new, technologically innovative products to the market, you owe it to yourself to read Crossing The Chasm and put Moore's ideas into practice.
- Seeing What's Next: Using Theories of Innovation to Predict Industry Change by Clayton M. Christensen, Erik A. Roth and Scott D. Anthony - A followup to The Innovator's Dilemma and The Innovator's Solution, this title deals with predicting change within industries. Here Christensen and his colleagues provide a model for how to spot the signals of impending industry change, anticipate the outcomes of competitive engagements, and assess how a firm's strategy will affect its future success (or lack thereof).
- The Balanced Scorecard: Translating Strategy into Action by Robert S. Kaplan and David P. Norton - Here, Norton and Kaplan lay out the fundamental ideas of the Balanced Scorecard which has become one of the most widely adopted performance management frameworks in business, and has evolved into a strategy execution process. The value of the Balanced Scorecard approach is its ability to help firms articulate their strategy in actionable terms, and to generate a concrete roadmap to implementing that strategy.
- The Strategy-Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment by Robert S. Kaplan and David P. Norton - The successor to The Balanced Scorecard, this is a further exploration of the ideas of Kaplan and Norton. Here they lay out five specific steps required to truly align strategy with operational implementation: 1) translate the strategy into operational terms, 2) align the organization to the strategy, 3) make strategy everyone's everyday job, 4) make strategy a continual process, and 5) mobilize change through strong, effective leadership.
- Strategy Maps: Converting Intangible Assets into Tangible Outcomes by Robert S. Kaplan and David P. Norton - Strategy Maps dives deeper into the mechanisms of creating real alignment between a firm's strategy and its people, processes, systems and information technology. Here the authors present a powerful new tool - the Strategy Map - for documenting and managing the strategic intent of the firm, and mapping that strategy to tactical objectives.
- The Execution Premium: Linking Strategy to Operations for Competitive Advantage by Robert S. Kaplan and David P. Norton - Our final Kaplan & Norton recommendation of the day, The Execution Premium provides firms with the know-how needed to generate an effective strategy, plan the tactical implementation and execution of the strategy, and - perhaps most importantly - test and refine the strategy.
And there you have it... 10 titles that IT leaders should familiarize themselves with, in order to be better equipped to analyze technological decisions, and evaluate technological capabilities, in terms of the strategies they support or enable. Armed with this kind of knowledge, IT leaders can move away from being thought of as specialists in technology alone, and begin to be recognized and valued as business strategists in their own right.
10 Essential Reads For CIOs, CTOs and IT Managers
by Phillip Rhodes
Posted on Monday May 13, 2013 at 12:04PM in Technology
Over the past 10 years or so, IT has found itself under fire from many quarters. "Thought Leaders" like Nicholas Carr write about how IT Doesn't Matter (we'll revisit that in a future blog post) and other pundits openly ask Is The CIO Dead?
The truth, of course, is that this is mostly hyperbole, but with an element of truth hidden underneath. And that element of truth is that IT does matter and CIOs are important... to the extent that they add value to the business. This means that IT leaders - CIOs, CTOs, and Directors of IT - must move beyond a single-minded focus on technology, take a broader view of the organization, and understand how information technology provides capabilities that support and enable business strategy. And in order to truly have a "seat at the table" alongside the CEO, CFO and other traditionally respected positions within the organization, IT leaders must become trusted partners who routinely demonstrate the ability to add essential strategic insights to the conversation. To this end, I posit that CIOs and their ilk should set down the latest Hadoop book, close the Cassandra and Mesos tabs in their browsers, cancel the meeting with the sales guy from Microsoft, lock the door, and focus on business strategy: how technological capabilities can create new strategic opportunities for the organization, and support existing strategic initiatives.
As part of the "Re-education of the IT Leader", there are a handful of foundational works on business strategy, and a few titles on the intersection of strategy and technology, that I recommend to all technology leaders who want to gain more influence and relevance within their organization, and who want to contribute to the strategic direction of the firms where they work. At least a passing familiarity with the following works will greatly expand your ability to see things from the perspective of the CEO and to begin to think strategically.
So, without further ado, here are 10 essential reads for CIOs, CTOs, Directors of IT and other IT leaders:
- Competitive Strategy: Techniques for Analyzing Industries and Competitors - Michael Porter's seminal title, this book basically created the modern field of strategic analysis. Porter's Five Forces model, laid out in this book, is one of the most well known and frequently encountered techniques for analyzing a firm's position within its industry and for thinking about strategic initiatives. If you can only read one book on business strategy, go to the source and read this one. Yes, it's somewhat academic and can be a bit dry and terse at times, but it's worth the effort. Your CEO, or the consultants your CEO hires, will be talking in terms of the vocabulary defined in this book.
- Capability Cases: A Solution Envisioning Approach - this is an excellent follow-up to the first Michael Porter book, as it builds on Porterian strategic analysis and presents a methodology for using that analysis to generate new strategic initiatives which will be supported by technological capabilities, and then building a case for those capabilities in terms of their impact on the business. Adopting an approach like this is how a CIO can move from being seen as filling a strictly tactical or operational role to being seen as an influential strategist in his/her own right. Note that we have written extensively on the Capability Cases approach here, and you may find that useful reading as well. See: Part One, Part Two, and Part Three in our series on Capability Cases.
- Adaptive Enterprise: Creating and Leading Sense-And-Respond Organizations - If you want to understand and talk about possible sweeping changes to the very DNA of your organization... new business structures that are better equipped for competing in the 21st century - like "Sense and Respond" management - then this book is for you. The authors present a theory of a fundamentally different way of structuring organizations, one which results in a much more responsive and agile organization, better suited to compete and survive in an era of hyper-competition.
- Business @ The Speed of Thought - Bill Gates wrote this back in the late 1990's and it's as relevant today as it was then. In this book, Gates presents a vision for what he called the "Digital Nervous System". As an analogy to the human autonomic nervous system, the "Digital Nervous System" represents the way in which a firm's IT systems enable the flow of signals and information within the "body" of the firm, allowing coordinated decision making and agile maneuvering. And despite the decade plus which has passed since this title appeared, most firms still do not truly use their IT systems in this way. For more information on achieving this goal, see our article: Digital Nervous System.
- Peripheral Vision: Detecting the Weak Signals That Will Make or Break Your Company - the title really says it all. In a hypercompetitive era like the one we operate in now, it is more important than ever to be able to detect changes in your environment and react to them appropriately. Consuming massive amounts of data, applying automated analytics, and extracting meaningful insights are tasks where IT is absolutely essential. In this regard, IT can serve to provide a sort of "radar sensing" capability to the organization, which allows it to "see through the fog" and avoid danger. This title is a deep dive into the importance of, and techniques for, discovering these "weak signals" and surfacing them for your enterprise.
- Understanding Michael Porter: The Essential Guide to Competition and Strategy - if you don't have time to read all of Michael Porter's works first-hand, this is a great "Cliffs Notes" review of his thinking and techniques. Written by one of Michael's students and collaborators, this book provides a solid overview of Porterian strategic analysis, without being quite as dry and terse as the original source material. Reading this is still not truly a substitute for reading Porter himself, but it's a good start, and is definitely better than not studying the subject at all.
- Competitive Advantage: Creating and Sustaining Superior Performance - another seminal work by Michael Porter, this builds on and expands the ideas presented in his first title. As before, this is essential reading for anyone who seeks to be a knowledgeable business strategist.
- On Competition - The third and final Michael Porter book in this list, it finishes fleshing out the overall field of Porterian strategic analysis. As with his other works, it is somewhat academic and isn't exactly leisure reading, but the payoff is more than worth the effort.
- Good Strategy, Bad Strategy - a lot of people bandy the word "strategy" around, and a lot has been written on the topic. This work by Richard Rumelt dives into what "strategy" actually is, explains what is and isn't strategy, and helps you distinguish good strategy from bad. It is a great complement to all the Michael Porter material previously listed.
- Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers - written by Alexander Osterwalder, this book is often recommended to founders of startups, but it has application inside any business... especially one which feels besieged and under fire from all directions, and which is desperately trying to find new ways to compete in an increasingly competitive environment. Competition isn't solely based on your products! You can also compete by altering your business model, and that is the essential discussion in this book: What is a "business model", how do you create a new one, and how do you know if the new one is good? Read this book and you'll have some very powerful arrows in your quiver when presenting new strategic models for your enterprise.
There are, of course, more than ten titles that I might recommend to a CIO, CTO or Director of IT (or to anyone in business for that matter). And while this list gets at some real essentials, it is by no means comprehensive. Please share your own suggestions, comments and observations in the comments on this post. For even more great suggestions from the Fogbeam Team, see Part Two of our list.
If you've read this far, please visit us at Fogbeam Labs and/or follow us on Twitter.
Why The Kiera Wilmot Situation Is Bad For America
by Phillip Rhodes
Posted on Friday May 03, 2013 at 02:12PM in Technology
In case you missed the recent news, a 16-year-old Florida high-school student named Kiera Wilmot was expelled from school and charged with felony counts of "possession/discharge of a weapon on school grounds" and "discharging a destructive device" for conducting a harmless science experiment which resulted in a small explosion... one that injured no one and caused no damage.
Predictably, the story has sparked a firestorm of controversy. Add in the fact that the student in question is female and black, and the story has quickly morphed into one focusing on the possible racist and/or sexist influences involved. And yes, there likely are overtones of both sexism and racism influencing the public officials involved in this story. But I think focusing on that misses a much larger issue, one that represents bad news for all hackers, makers, DIY'ers, amateur scientists... and for America as a whole. There are serious economic consequences to the kind of narrow-minded, overly risk-averse, brain-dead thinking which leads to a story like this.
Simply put, our society seems to be moving towards an overly protective, overly risk-averse, parochial mindset, where we are all encouraged to accept blind conformity to "authority" in the name of "safety". Students are arrested for harmless experiments, at a time when business leaders around the country are screaming for improvements in STEM education, at a time when our country is facing a continuing severe economic crisis, and at a time when we may or may not be balanced on the precipice of a "manufacturing renaissance" which could bring jobs to the unemployed, and bolster the economy across the board. But what message are we sending to innovators, especially young ones, when incidents like this happen? And what about the damage done by people granted "authority" over others, as so well demonstrated in the (in)famous Stanford Prison Experiment?
I contend that this Kiera Wilmot story, and similar stories, will have (or have had) a "chilling effect" on all the hackers, makers, DIY'ers, amateur scientists and hobbyists around the country who are working to educate themselves, create new things, and provide the basis for the future generation of technologically savvy, well-educated, innovative citizens which our nation needs. And this is at the worst possible time... the DIY movement, or "maker movement", whatever you want to call it, has been flourishing for a few years now. Hackerspaces are popping up all over, individuals are buying (or better yet, building) their own 3D printers, CNC milling machines, and robots of various sorts, and are learning and creating and making at a blinding pace. Heck, even Radio Shack has re-embraced the DIY crowd - which it had abandoned decades ago - and now sells Arduino microcontrollers and an expanded selection of discrete components and electronic kits.
So, just at the time when young people may be starting, ever so slowly, to embrace technological exploration, science, electronics, robotics, etc., we throw a glass of cold water in their faces by demonstrating that "doing science on your own will mean going to jail for the smallest mistake". And what does it say to the people holed up at their local hackerspace, working on DIY fusion research, or high-voltage electronics experimentation, or anything else with even a slight "danger factor"? Are people going to be less likely to experiment and participate in shaping the future, when the threat of going to jail for a harmless mistake is lingering in the air?
Sadly, this is not a new story. People have been lamenting, for example, the restrictions on components found in chemistry sets for years... But it's a big jump from restricting access to components needed to run an experiment, to putting someone in jail for simply running an experiment in which no one was harmed and nothing was damaged. Let me reiterate that last bit... despite the "explosion", no one was harmed and no property was damaged. And yet, this young lady is still being charged with felonies and will be tried as an adult. A spokesperson for the school district said:
"We urge our parents to convey to their kids that there are consequences to their actions."
This is wrong... there are (or should be) consequences for the outcomes of actions. An action which causes no harm or injury should *not* have any punishment associated with it. Otherwise we will have to ask: "what are the consequences of brain-dead educational policies that dampen curiosity, discourage learning and experimentation, and turn kids away from science?" Personally, I don't think we want to experience those consequences.
Prolog? I'm Going To Learn Prolog??
by Phillip Rhodes
Posted on Friday May 03, 2013 at 12:34AM in Technology
Some time ago I blogged about the availability of some great resources for learning Prolog. At the time, the materials I'd found were:
- Artificial Intelligence through Prolog
- Introduction to Prolog for Mathematicians
- Building Expert Systems in Prolog
- Logic, Programming and Prolog
- Prolog Programming: A First Course
Luckily, there are also a number of high-quality implementations of Prolog available, including GNU Prolog, SWI Prolog and Ciao Prolog. Now to find some time to dig in...
Edit (05-08-2013): A helpful Hacker News commentator has pointed out another good title for inclusion in this post:
Edit (03-30-2009): Apropos, this link just appeared at the top of programming.reddit.com. Good stuff.
Edit (05-04-2013): For anybody who doesn't get the reference in the title of this post:
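For anyone wondering what all the fuss is about, here is a small taste of what Prolog code looks like - a minimal sketch of my own, not taken from any of the titles above, with purely illustrative facts and names. It should load as-is in SWI Prolog or GNU Prolog:

% Facts: parent(Parent, Child).
parent(tom, bob).
parent(tom, liz).
parent(bob, ann).

% Rule: X is a grandparent of Z if X is a parent of some Y,
% and that same Y is a parent of Z.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

Consulting this file and querying ?- grandparent(tom, ann). answers true, while ?- grandparent(tom, Who). enumerates every solution (here, Who = ann). That declarative, query-in-any-direction style is exactly what makes the books listed above such interesting reading.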