Monday, September 3, 2018

Engines and fuel - who writes quality content?

Tweetise.

In software, everyone wants to build engines, and no one wants to make fuel. A platform for executing content has high potential leverage and lots of vendors make those. The expected community of fuel makers rarely materializes.

Content for software engines breaks down along two axes: simplicity versus complexity and generality versus specificity to the customer’s environment. Half of the resulting quadrant is unsuitable for sharing communities, because it’s not general.

Simple and customer specific: a list of assets and identities. Vendors clearly can’t do these at all, so they make management tools. This quadrant is an obvious dead zone for content.

Complex and customer specific: personnel onboarding and termination processes. Again, dead zone.

Sad times occur when companies try to operate in one of the dead zones: for example, the process automation engine. A hypothetical vendor faces years of customer issues root caused to process failures, so they decide to help customers succeed by automating the process.

Turns out that the customers who think about process already have 20 of these on the shelf. The customers who don’t? Some aren’t interested, and some want to be told what they should be doing. They need fuel, and the vendor can’t give it to them without professional services.

Complex and general: compliance tests for commercial off-the-shelf (COTS) solutions. This is where in-house content teams are justified; their success is measured in lower sales cycle times and professional services spend. Those metrics are hard to defend, but that’s another story.

Compliance auditing is an excellent place to observe this type of content in the market. Anything that can execute script on an endpoint and return results can be used to check compliance with (say) the PCI dirty dozen.

You’d be mad to do this with a tool that doesn’t already have some content though. Who wants to maintain the scripts to determine if all your endpoint agents are getting their updates? So customer demand for content is high.
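That kind of check is simple to state but tedious to maintain by hand. A minimal sketch of one such compliance check, assuming a hypothetical policy of "endpoint agents must have updated within the last seven days" (the threshold and function names are illustrative, not from any real product):

```python
from datetime import datetime, timedelta

# Hypothetical policy: an agent is non-compliant if its last
# successful update is older than this threshold.
MAX_AGE = timedelta(days=7)

def agent_is_stale(last_update: datetime, now: datetime,
                   max_age: timedelta = MAX_AGE) -> bool:
    """Return True if the agent's last update exceeds the allowed age."""
    return now - last_update > max_age

# Example: against a 7-day policy, a 10-day-old update fails
# and a 2-day-old update passes.
now = datetime(2018, 9, 3)
print(agent_is_stale(datetime(2018, 8, 24), now))  # True  (10 days old)
print(agent_is_stale(datetime(2018, 9, 1), now))   # False (2 days old)
```

The check itself is trivial; the real cost is in the surrounding fuel — knowing where each agent records its last update, handling every platform's quirks, and keeping the thresholds aligned with the audit standard as it changes.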

The supply side is more challenging. The engine vendor makes some content because they need it, but the work is harder than it appears and they’re eager to share it. Why can’t they? There are four paths they might try.

1: Pay their own developers to write content. Mostly gets what they want at high cost. 2: Pay others to write it, through outsourcing or spiffs. Mostly gets less than they want at high cost. 3: Market a partner program and SDK. Mostly gets nothing, at low cost.

4: Do nothing and hope for the best. In a strong social community, this can actually work great. Participants learn new skills, improve their employability and social standing, and genuinely make the world a bit better. Without that community, the vendor had better pay up.

The strongest motivation to make complex content for an engine to execute is if you own that engine or make a living with it. Next is if you improve your individual standing with this work. The weakest motivation is held by other software companies seeking marketing synergy.

Which brings us to the last quadrant, simple and general: a malicious IP address blacklist. This is where entire companies are justified through attractive-looking profit margins; their success can be measured in the usual metrics of sales.

The threat intelligence market is a recent example of this effect. TI comes from four sources: internal teams, paid vendors, private interest communities, and open source communities. In the first three, employees produce quality content for real world use cases.

Ask your friendly neighborhood security expert which TI sources they prefer, and I expect the answer will look like @sroberts’: https://medium.com/ctisc/intelligence-collection-priorities-10cd4c3e1b9d

Taking responsibility for TI content leads to increasing risk avoidance as well, further reducing its value. Over time the developer facing a flood of support tickets will err on the side of caution, accept more false positives, and add caveat emptor warnings.

Another interesting factor in these models is the mean time to maintenance. Threat intel needs analysis of fast-moving criminal groups and rapidly decays in value. Compliance content relies on analysis of slow-moving components and can last for years at low maintenance cost.

I think that this dichotomy in maintenance cost holds across most examples in the simple to complex axis. Connectivity drivers are complex and last for a long time. VM or container definitions are simple wrappers around complex content and last for a short time.

The requirement for maintenance defines whether the vendor offers support for the content, which in turn defines many customers’ willingness to depend on it and consultants’ willingness to provide it.

Playing it out, higher maintenance content is less supported, and therefore more likely to be used by risk-embracing customers with strong internal teams and bias towards community software. Lower maintenance, higher support, more services dollars.