Saturday, November 10, 2018

Licensing thoughts, round two


Tweetise.

License Models Suck got a lot of interesting conversations started, so it’s time to revisit the topic from the customer’s perspective. Let’s also be clear: this is enterprise sales, with account reps and engineers; self-service models are for another day.

As a vendor, I see the options I described as clearly different; but as a customer, I just want to buy the thing I need at a price that works. “Works” here means “fits in the budget for that function” and “costs less than building it myself or buying it elsewhere”.

A price model has to work when growth or decline happens. As a customer, I build a spreadsheet model to see whether the deal would quit working under some reasonably likely future scenarios. If it passes that analysis, fine. I don’t care if the model is good or bad for the vendor.
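
For concreteness, here’s a minimal sketch of that spreadsheet model as code (Python stands in for the spreadsheet here); every number in it is a hypothetical placeholder.

# A customer-side deal model: does a per-unit deal keep working under
# a few reasonably likely growth and decline scenarios?

UNIT_PRICE = 120.0        # annual price per unit (hypothetical)
CURRENT_UNITS = 500
ANNUAL_BUDGET = 80_000.0  # the budget for this function (hypothetical)

SCENARIOS = {
    "flat": 1.00,
    "modest growth": 1.25,
    "big growth": 2.00,
    "decline": 0.60,
}

for name, factor in SCENARIOS.items():
    cost = CURRENT_UNITS * factor * UNIT_PRICE
    verdict = "works" if cost <= ANNUAL_BUDGET else "quits working"
    print(f"{name:>13}: ${cost:>10,.2f} ({verdict})")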

So, the obvious question: why doesn’t flat rate pricing rule the world? It’s certainly the easiest thing to model and describe! Answer: organizations are internally subdivided.

The customer may work at BigCo, and BigCo may use some of the vendor’s products, but the customer doesn’t need to buy for all of BigCo. They need to solve the problem in front of them. Charging them a flat BigCo price for that problem doesn’t work.

What’s more, the customer can’t do anything to make it work. Maybe they can help the sales team pivot this into a top-down BigCo-wide deal, but that’s going to take a long time and require all sorts of political capital and organizational skill that not every customer has.

This is easy to solve, right? Per-unit pricing is the answer! Only, we’re talking enterprise sales and products that require hand-holding. The vendor has a spreadsheet model too, and that model doesn’t work if a sales team isn’t producing enough revenue per transaction.

If the customer’s project isn’t big enough, then the deal won’t work with per-unit pricing. In response, the vendor will drop deals that are too small, set minimum deal size floors for their products, or make product bundles that force larger purchases.

If the customer has no control over the number of units, a per-unit price might as well be a flat rate. There’s no natural price elasticity, and the only way to construct a deal is through discounting.

Why not get unnatural then? Just scale the price into bands! You want 10 of these? That’s $10,000 each. You want 10,000 of these? That’s $10 each. Why not sell the customer what they want?
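
Here’s a minimal sketch of what those bands look like as code; the 10-unit and 10,000-unit bands come from the example above, and the middle bands are invented to fill out the curve.

# Banded pricing: small quantities pay a high per-unit price,
# large quantities a low one.

PRICE_BANDS = [
    # (minimum quantity, price per unit)
    (10_000, 10.0),    # from the example above
    (1_000, 100.0),    # invented middle band
    (100, 1_000.0),    # invented middle band
    (10, 10_000.0),    # from the example above
]

def unit_price(quantity: int) -> float:
    for minimum, price in PRICE_BANDS:
        if quantity >= minimum:
            return price
    raise ValueError("below the minimum deal size")

print(unit_price(10))      # 10000.0 each
print(unit_price(10_000))  # 10.0 each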

Why not? Because the cost to execute a deal and support a customer is variable and difficult to model, and the more complex a pricing model is, the less clarity you have into whether your business is profitable and healthy.

The knock-on effects from that non-clarity are profound, because they affect anything that involves planning for the future: it’s more difficult to raise capital or get loans, to negotiate partnerships, and to hire and retain talent.

And so we mostly see fairly simple pricing systems in mid-sized enterprise software vendors. I’m most familiar with “platform with a unit price, less expensive add-ons locked to the same unit quantity.”
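
Sketched as code, with hypothetical prices and add-on names, that model looks something like this:

# "Platform with a unit price, add-ons locked to the same unit
# quantity" -- every price and product name here is made up.

PLATFORM_UNIT_PRICE = 100.0
ADDON_UNIT_PRICES = {"reporting": 20.0, "premium_support": 35.0}

def quote(units: int, addons: list[str]) -> float:
    # Each add-on is purchased at the same unit count as the platform.
    total = units * PLATFORM_UNIT_PRICE
    for addon in addons:
        total += units * ADDON_UNIT_PRICES[addon]
    return total

print(quote(250, ["reporting"]))  # platform + reporting, 250 units each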

This pricing works for the middle of the bell curve, but small customers are underserved while large customers negotiate massive discounts or all-you-can-eat agreements that can hurt the vendor.

Sunday, October 28, 2018

Phases of Data Modeling

Say that you want to use some data to answer a question. You’ve got a firewall, it’s emitting logs, and you make a dashboard in your logging tool to show its status. Maybe even alert when something bad happens. You’ve worked with this firewall tech for a few years and you’re pretty familiar with it.

You’ve built a tool at Phase 1. A subject matter expert with data can use pretty much anything to be successful at Phase 1. That dashboard may not make a lot of sense to anyone else, but it works for you because you’ve seen that when the top right panel turns red, the firewall is close to crashing. You know that the middle left panel is a boring counter of failed attackers, while the middle right panel is bad news if it goes above 3.

One day your team gets a new member who’s interested in firewalls and they start asking questions. You improve the dashboard in response to their questions, and other teams start to notice. Some more improvements and you can share your dashboard with the community. Maybe it gets you a talk at a conference. This is a Phase 2 tool. People don’t need to know as much as you do about that firewall to get value from your dashboard.

So far so good... but now you start to get some tougher questions. “Can I use this in my SIEM?” Or “can you do the same thing for this other firewall?” Now you’re getting asked to put this data into a common information model.

This is a Phase 3 problem: understand the data sources and use cases well enough to describe a minimalist abstraction layer between them. There is some good news here, because Phase 3 tools are hard to do and therefore worth money. Why? Well, let’s look at the process (a sketch of the first two steps follows the list):

1. Read the information model of the logging or security product in question and understand what it’s looking for. There’s no point in modeling data it can’t use.
2. Find events in your data that line up with the events that the product can understand. Make sure they’re presenting all of the fields necessary, figure out how you’ll deal with any gaps, and describe the events properly.
3. Test that it works, then start over with the next event. Continue until you’ve gotten everything the model covers now.
4. Decide if it’s worth it and/or possible to extend the model and build the rest of the possible use cases.
5. Decide if it’s worth rethinking your Phase 1 and Phase 2 problems in light of the Phase 3 work (probably not).
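
Here’s a minimal sketch of steps 1 and 2 for a single event; the field names on both sides are hypothetical rather than any particular product’s information model.

# Map a raw firewall event onto a common information model.

# What the downstream product's model expects (step 1).
MODEL_FIELDS = ["timestamp", "src_ip", "dest_ip", "action"]

# How this firewall's fields line up with the model (step 2).
FIELD_MAP = {
    "time": "timestamp",
    "source": "src_ip",
    "destination": "dest_ip",
    "disposition": "action",
}

def to_model(raw_event: dict) -> dict:
    event = {model: raw_event.get(raw) for raw, model in FIELD_MAP.items()}
    for field in MODEL_FIELDS:
        event.setdefault(field, None)  # a gap you must decide how to handle
    return event

print(to_model({"time": "2018-10-28T10:00:00Z",
                "source": "10.0.0.5",
                "destination": "203.0.113.9",
                "disposition": "blocked"}))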

This is tedious work that requires some domain knowledge. That doesn’t mean you should wait until the domain-knowledgeable wizard comes along... domain knowledge is gained through trial and error. Try to build this thing! When it doesn’t work, you can use this framework to find and fix the problem.

Let’s also consider a common product design mistake. When using this perspective, it’s easy to think that the phases are a progression through levels, like apprentice to journeyman to master. Instead, these phases are mental modes that a given user might switch between several times in a working session.

I’m fairly proficient with data modeling, but that doesn’t make me a master of every use case that might need modeled data. An incident response security analyst may be amazing at detecting malicious behavior in the logs of an infrastructure device, but that doesn’t mean they actually understand what the affected device does.

This distinction is important when product designs put artificial barriers between phases of use, preventing the analyst from accessing help they need in the places they need it, or preventing them from moving beyond help they don’t need. More on product design next week.

Not a tweetise, just a link

Sunday, September 30, 2018

Weekly Status

Tweetise

People are creatures of habit, and effective work is produced by grooming useful habits. Here’s a quick write-up of a useful habit: the weekly status report.

I haven’t always written these, and I haven’t always worked for people who’ve wanted to receive them, but I’ve been at my most effective when I was writing and discussing them.

A weekly report of your status is a distillation of the most important things that have happened in the last few days. It’s also an agenda for the next week, and a chance to reflect. It can also help you actually have a weekend, because you’re closing the books on Friday.

How to work this magic? You’ll need a text editor. I’m also fond of a cloud service for syncing text documents. You’ll need a communication tool too: email, slack, or a wiki.

The document: a simple text document with no formatting.

Hi,

Meta:
* 1 line about you. Happy? Sick? Overworked?

$project:
* 1-3 single line statements of status affecting events.
* Started X
* Y Ongoing
* Finished Z
* Last release, date, purpose
* Next release, ETA, purpose
* The goal after that

*Repeat as needed.*

Thanks,
$me

Every Friday when I’m about ready to call the day done, I open this document and replace last week’s material with this week’s. I reflect on how I’m doing and how that presents. Same items not moving? Can’t stand looking at this any more? I need help and this is my chance to ask.

Sync: If it’s possible to put this text block in a cloud sync service, then it’s possible to do this on your phone while riding to the airport or standing in the boarding line. That’s remarkably useful. The big thing is to see what you wrote last week.

A push-based communication channel is ideal, because the recipients aren’t going to look at a web page. They’re all too used to safe and boring status, so don’t be boring. Email or Slack work. Skip the formatting and pictures. Just the status.

I’ve been in teams that used wikis or Evernote for status updates, and it can work, but it’s notably worse; those are the teams where a lot more phone calls were needed. There’s a reason those tools all send email notifications.

Finally, who to send your status to? Your manager is supposed to be thrilled to get a concise, timely, and accurate ping of status. However, folks sometimes fall short of ideals, and that doesn’t have to stop you from doing this work for yourself.

Given sufficient tuning and need, the weekly status can go to your teammates, your direct reports, or a cross-functional group. I do think it’s important to send it to someone, otherwise it’s a diary. But as in any writing, think of the audience.

Sunday, September 23, 2018

Community

Tweetise.

So you’re a software company, and you want to have a community. What next?

“Why community” is a great place to start: the stated reasons and budget are often somewhere in marketing, but the community is equally important for customer support. Community is where soft guidelines are communicated, FAQs are advertised, and newcomers are made welcome.

All of that means reduced customer support costs, because the folks answering these questions aren’t on your payroll. Note that this also means you don’t have a lot of control over what they say; we’ll dig into that in a bit.

A software community is a forum for discussions about your software and the problems that it solves. This may take many forms, non-exclusively: asynchronous email lists (Mailman) and fora (Lithium), synchronous channels like Slack, or face-to-face user groups and conferences.

In an ideal world these are all options on the table, but there’s a very definite cost gradient to consider. The more synchronous you get, the more it costs for fewer people; but they get better results. Support may be a major beneficiary, but they have no budget power.

Marketing is the team paying for this if anyone does, so the dollars are entirely dependent on the community’s ability to meet marketing’s agenda. That can be an issue for the types of folks who offer free support for someone else’s software.

Who are those community members, anyway? They are wonderful gems. Customers, pro service partners, maybe internal employees who just can’t get enough. They’re putting “spare time” into your support forum because they care about people being successful, with your product.

They’re also doing work for themselves, building a community reputation. They’re the pool you’ll hire from as you grow. In the meantime, are you offering them a path to stay with you? Certifications? Awards? Where’s the public recognition of their effort?

Unfortunately, people are people and those nobly motivated activities might get blurred by bad behavior. While solving your problems, your community may also air views on race, sex, religion, politics. Fights happen. Do you even know, and are you prepared to keep the peace?

Moderation is absolutely required if you don’t want your community to turn into a cesspool. And so we return to the question of budget. Moderation means people, and people gotta eat, and quality people expect quality pay and tools for their job.

At a tiny scale, your company is able to do this work “on the side”. Just like the social engineering of people and project management, your star employees quietly shoulder it all while you congratulate yourself on not actually needing those functions.

Don’t kid yourself; there’s someone taking care of the social work you’re not seeing, and you’d better recognize their contribution before it stops. Keeping people working well together doesn’t just happen.

At a massive scale, there’s so much moderation and so much community that tiny and medium communities are forming around the main communities. If you’re getting a B-Sides, you’ve got a whole new set of problems.

The medium sized scale is where things are toughest. Big enough to truly need part-time or full-time paid help, but small enough to question that need and try to half-ass it. So, for those in that boat, let’s consider what a successful community looks like.

New users are welcomed & their problems are answered correctly. People are free to be themselves, but bigotry and bullying are not tolerated. Thorny problems get redirected to proper channels. Fights are resolved promptly without collateral damage.

The stars of the community are recognized and rewarded, regardless of where their paychecks originate. They keep magnifying your reach because they’re feeling good about doing that.

If that doesn’t sound like your community, you might be better off shutting it down until you hire someone to do it right. Buying tools isn’t going to help.

Sunday, September 16, 2018

Security Logging

Tweetise form.

Security logging is interesting. Detecting security and compliance issues means uncovering nasty little leakages of unintentional or surprising information all over. When you add a powerful security tool to the environment, it starts to shine light into dark corners.

No one expects that temporary file of sensitive data or the password in a script to be recorded. Credential tokens start safe, but get copied to unsafe paths. They’re not intentional flaws, but rather hygiene issues.

If a tool detects security hygiene issues, the responding team must decide if they believe the tool or not, and then what to do about it. As a vendor planning that security tool, figuring out which way the customer team will go is an existential crisis.

Obviously, if the customer doesn’t believe the tool, that sale isn’t made or that renewal doesn’t happen. Less obviously, even if the customer does believe the tool, success is not guaranteed. The social angles are too complex for today’s thread.

The logical path for tool developers is to log any data, offending or otherwise. It’s impossible to describe every possible problem scenario and filter objectionable material. Even getting the low-hanging fruit is bad, because it builds an expectation that the tool solves the hard problems too.

Worse, if the tool does not record the raw data and only records that a user did a prohibited thing at place and time... then the tool won’t be trusted. The user doesn’t remember doing a bad thing, and now it’s human versus log. Human wins.

So financial pressure leads to security tools logging everything they see. This is not ideal because it can mean worsening the security situation by logging and transmitting secure tidbits. Instead of searching every mattress in town, our raccoon-masked baddie can rob the bank.

Because belief is ahead of action in the customer’s decision path, data collection problems are true of failing security tools as well as successful ones. Everyone wants to be trusted, so everyone records at high fidelity.

“Encrypt all the things” is then used to protect these high-value stores. I’m reminded of the DRM problem, though... the data has to be in usable form to get used, so there’s always an exposure somewhere. Makes you wonder how many SOCs have extra folks listening in.

Sunday, September 9, 2018

Disrupting Ourselves

Tweetise here

Let’s talk about some received wisdom: “disrupt your own market before someone else does it to you”. Sensible advice: complacency can kill. Except disruption is generally a pioneering activity, and the survival rate for pioneers is lower than for copycats.

Corporate blind spots being what they are, this style of transition is more often a new company’s opportunity to disrupt an existing market. When done internally, it’s as disruptive as calving a new company.

Still, let’s assume our company has decided to change. Further assume that we’re not completely altering the business model from vertical integration to horizontal commoditization or vice versa. That takes executive team guidance, but I generally write about technology companies.

There are many architects with opinions on horizontal versus vertical technology stacks. Worse, they win budget to shift the stack under the rubric of self-disruption. Horizontal and vertical both work, so a team can start anywhere on the cycle and shift to the next step.


Moving from vertical to horizontal:
* Identify functional components
* Abstract those components with APIs (sketched after this list)
* Replace the ones that can’t elastically scale
* Start writing large checks to your IaaS of choice
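
A minimal sketch of that abstraction step, using a hypothetical storage component: define the API boundary first, wrap the existing implementation to fit it, and the elastically scalable replacement can arrive later without touching any caller.

from abc import ABC, abstractmethod

class BlobStore(ABC):
    # The API boundary: callers depend on this, not on an implementation.

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(BlobStore):
    # The existing vertical component, wrapped to fit the API.
    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}  # stand-in for real disk I/O

    def put(self, key: str, data: bytes) -> None:
        self._files[key] = data

    def get(self, key: str) -> bytes:
        return self._files[key]

# A cloud-backed BlobStore can later replace LocalDiskStore without
# changing any caller -- that is the point of the abstraction step.
store: BlobStore = LocalDiskStore()
store.put("report.csv", b"a,b,c\n")
print(store.get("report.csv"))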

That’s all fairly straightforward for a new project, but if you’ve got an existing customer base there are some challenges.
* Maintain performance and quality while complicating the architecture
* Decide whether to expose or hide the APIs… Who’s embracing and extending whom?

Worst of all:
* Does the license and business model still work after this change, or do you need to revisit product-market fit?
* Backwards compatibility... well, if you’re not Microsoft, let’s all have a good laugh over that one.

Moving from horizontal to vertical:
* Identify painful integrations that need consolidating.
* Define interfaces where your solution will tie into the rest of the world.
* Execute on ease of purchase, use, and assurance. The buyer must feel confident they didn’t make a mistake here.

There’s no lack of startup memoirs. Doing it from within a company is gnarlier, disrupting your own existing system. Professional services and the partner community are going to ask some tough questions. Sales and marketing might not be thrilled about rewriting the playbook.

Transition is reimplementation of capabilities, meaning forward progress slows or halts for at least a year. Strong support in a fat Q2 evaporates in the following lean Q1. Teams that mismanage their planning find their work going into the bit bucket, along with some executives.

To forestall that reckoning, leadership spends significant effort badmouthing the existing product: hopelessly outdated, unscalable, and just bad. This is easy and successful, and therefore the most damaging part of the entire process. It burns the boats and commits the company.

Once “Something must be done” is accepted wisdom, all manner of crazy can be considered reasonable. Add some sunk costs and it takes a major crisis to reset direction.

Monday, September 3, 2018

Engines and fuel - who writes quality content?

Tweetise.

In software, everyone wants to build engines, and no one wants to make fuel. A platform for executing content has high potential leverage and lots of vendors make those. The expected community of fuel makers rarely materializes.

Content for software engines breaks down along two axes: simplicity versus complexity, and generality versus specificity to the customer’s environment. Half of the resulting quadrants are unsuitable for sharing communities, because their content isn’t general.

Simple and customer specific: a list of assets and identities. Vendors clearly can’t do these at all, so they make management tools. This quadrant is an obvious dead zone for content.

Complex and customer specific: personnel onboard and termination processes. Again, dead zone.

Sad times occur when companies try to operate in one of the dead zones: for example, the process automation engine. A hypothetical vendor faces years of customer issues root caused to process failures, so they decide to help customers succeed by automating the process.

Turns out that the customers who think about process already have 20 of these on the shelf. The customers who don’t? Some aren’t interested, and some want to be told what they should be doing. They need fuel, and the vendor can’t give it to them without professional services.

Complex and general: compliance tests for common off-the-shelf (COTS) solutions. This is where in-house content teams are justified; their success is measured in lower sales cycle times and professional services spend. Those metrics are hard to defend, but that’s another story.

Compliance auditing is an excellent place to observe this type of content in the market. Anything that can execute script on an endpoint and return results can be used to check compliance with (say) the PCI dirty dozen.

You’d be mad to do this with a tool that doesn’t already have some content though. Who wants to maintain the scripts to determine if all your endpoint agents are getting their updates? So customer demand for content is high.
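
For a flavor of that content, here’s a sketch of one such check; the status file path, its format, and the freshness threshold are all hypothetical, since a real check targets a specific product.

import json
import time

AGENT_STATUS_FILE = "/var/lib/agent/status.json"  # hypothetical path
MAX_AGE_SECONDS = 7 * 24 * 3600                   # say, one week

def check_agent_updates() -> dict:
    # Return a pass/fail result the engine can collect and report.
    try:
        with open(AGENT_STATUS_FILE) as f:
            status = json.load(f)
        age = time.time() - status["last_update_epoch"]
        return {"check": "agent-updates", "pass": age <= MAX_AGE_SECONDS,
                "age_seconds": int(age)}
    except (OSError, KeyError, ValueError) as err:
        # An unreadable endpoint fails closed.
        return {"check": "agent-updates", "pass": False, "error": str(err)}

print(json.dumps(check_agent_updates()))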

The supply side is more challenging. The engine vendor makes some content because they need it, but the work is harder than it appears and they’re eager to share the load. Why can’t they? There are four paths they might try.

1. Pay their own developers to write content. They mostly get what they want, at high cost.
2. Pay others to write it, through outsourcing or spiffs. They mostly get less than they want, at high cost.
3. Market a partner program and SDK. They mostly get nothing, at low cost.

4. Do nothing and hope for the best. In a strong social community, this can actually work great. Participants learn new skills, improve their employability and social standing, and genuinely make the world a bit better. Without that community, the vendor had better pay up.

The strongest motivation to make complex content for an engine to execute is if you own that engine or make a living with it. Next is if you improve your individual standing with this work. The weakest motivation is held by other software companies seeking marketing synergy.

Which brings us to the last quadrant, simple and general: a malicious IP address blacklist. This is where entire companies are justified through attractive-looking profit margins; their success can be measured in the usual metrics of sales.

The threat intelligence market is a recent example of this effect. TI comes from four sources: internal teams, paid vendors, private interest communities, and open source communities. In the first three, employees produce quality content for real world use cases.

Ask your friendly neighborhood security expert which TI sources they prefer, and I expect the answer will look like @sroberts’: https://medium.com/ctisc/intelligence-collection-priorities-10cd4c3e1b9d

Taking responsibility for TI content leads to increasing risk avoidance as well, further reducing its value. Over time the developer facing a flood of support tickets will err on the side of caution, accept more false positives, and add caveat emptor warnings.

Another interesting factor in these models is the mean time to maintenance. Threat intel needs analysis of fast-moving criminal groups and rapidly decays in value. Compliance content relies on analysis of slow-moving components and can last for years of low maintenance costs.

I think that this dichotomy in maintenance cost holds across most examples in the simple to complex axis. Connectivity drivers are complex and last for a long time. VM or container definitions are simple wrappers around complex content and last for a short time.

The requirement for maintenance defines whether the vendor offers support for the content, which in turn defines many customers’ willingness to depend on it and consultants’ willingness to provide it.

Playing it out, higher maintenance content is less supported, and therefore more likely to be used by risk-embracing customers with strong internal teams and bias towards community software. Lower maintenance, higher support, more services dollars.