Sunday, November 3, 2019

Offering Multiple License Models

I’ve written quite a bit about licensing software now; you can start here to follow the whole thread. In The Platform License Problem, I mentioned free pricing as a hide-the-sausage technique. When there are multiple markets to find product fit in, and the vendor has a software base that tackles those markets, the platform problem applies. But burying the cost of a shared platform isn’t the only reason to give away software, so let’s look at some more ways that can happen.

Freemium has grown very popular in the shadow IT, consumer tech, and open source based tech markets. With a freemium model, consumers can get your product for free without support, but have to pay for “extras”: additional features, related services, and/or a support contract. The pwSafe password manager is free, but cloud-based synchronization costs. The Bear text editor is free, but advanced features cost. Splunk is free to use up to 500 megabytes a day, but costs quite a bit for more. One might say these models are equivalent to a free trial on a service, such as Apple Music or Spotify. However, freemium is different in that there is no time limit: instead of keeping the same features when you start paying, or losing the service when you don’t, with freemium you can use the free version forever and get more features when you pay. It is more like the shareware model, without nag screens.
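To make the mechanics concrete, here is a minimal sketch of how a freemium gate might look in code. The 500 MB daily cap echoes the Splunk example above; everything else (the plan names, feature list, and function names) is invented for illustration.

```python
# Minimal sketch of a freemium gate: a free tier capped by daily usage,
# and a paid tier that unlocks extra features. All names are hypothetical.
from dataclasses import dataclass

FREE_DAILY_LIMIT_MB = 500  # echoes the Splunk-style daily cap mentioned above

@dataclass
class Account:
    plan: str              # "free" or "paid"
    usage_today_mb: float  # metered by the product, however that happens

def can_ingest(account: Account, new_mb: float) -> bool:
    """Paid accounts are unmetered; free accounts stop at the daily cap."""
    if account.plan == "paid":
        return True
    return account.usage_today_mb + new_mb <= FREE_DAILY_LIMIT_MB

def has_feature(account: Account, feature: str) -> bool:
    """Feature gating: advanced features only light up on the paid plan."""
    free_features = {"basic_search", "local_sync"}
    return feature in free_features or account.plan == "paid"
```

The point is simply that the same binary serves both markets; the gate is a couple of checks, not a separate product.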

Shareware, for those who don’t remember it, was a try-before-you-buy model popular in the early days of the Internet. The customer could download the package and use it with full functionality for a limited time, but had to buy a key to continue. After timing out, the software might stop working, or continue working with nag screens or watermarks on its output. Freemium packages hosted on app stores have largely wiped this model out, because they let the small developer outsource the tedious and complex work of handling customer payments.

Even without the app store’s help, freemium models can be great for vendors. They encourage word-of-mouth networking, as well as advertising from taste-makers and aggregators. Reviewers and users are happier to recommend products that new users can get for free. New users are happier to try products for free. The vendor gets low-fidelity signal from the downloads and traffic of a broad pool of potential customers, and can experiment with the potential for features to convert those customers. The vendor can also instrument the free product for higher-fidelity feedback. The only downside for a vendor is miscalibrating the free/paid split; too much value for free means failing to capture revenue. As long as costs are covered, though, those are theoretical losses that can be ignored in the interest of living a peaceful life. There’s always the possibility of adding a better paid feature that changes the picture.

Lite editions of premium products are a slightly different take on the same theme. Typically introduced after a successful product has gained traction, the lite edition is used to expand that product’s influence without requiring a full license for every user. AutoCAD, Visio, and Adobe Acrobat all played this game very well. This model is growing less popular as vendors embrace cloud and app-store delivery of software, but the fashions of software are fickle.

The ultimate extension of the lite version is open source: the software is free to use without time limits, and the vendor must build a business around it using services, support contracts, or enterprise features. It can be harder for an open source vendor to calibrate that free/paid value balance, particularly when SaaS offerings erode the value of service and support.

These models all have something in common: they allow the vendor to put basically the same software into multiple customer markets. The target market of customers willing to pay for the product is satisfied, while a broader market of potential customers is also satisfied at very low risk. The vendor can easily stratify further by offering a “super pro” version for even more money, limiting complex features to the customers who need them.

It’s a great option that costs even less when starting off from a SaaS base. That said, alternate consumption models can also be overly complicated when you’re still trying to find product-market fit. That struggle is particularly challenging for the open source vendors who start there on day one, with a default business model of linear-growth services that faces significant erosion risk.

Saturday, October 26, 2019

The Platform License Problem

In my other three posts about licensing, I discussed simple products. But what about platform companies?

A platform company sells two types of products: the platform, which enables everything else, and the use cases which rely on that platform to solve specific problems. The key to the platform company definition is that the solutions will not work without the platform; they are add-ons sold by the first party. You can’t buy the add-on without the platform.

This model is really exciting for vendor and customer because it means lots of different problems solved in the same way, with a single decision. There’s an interesting pricing challenge down this road though: the platform plus one add-on is less compelling than the platform plus many add-ons. Worse, the platform cost buoys the total price to a point higher than single-purpose competitor products. Result? The land-and-expand motion rarely works out in first-deal pricing, unless the customer cuts to the chase and buys more add-ons in the first deal.
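A back-of-the-envelope example makes the problem visible; all of the numbers below are invented for illustration.

```python
# Invented numbers to illustrate the first-deal pricing problem described above.
platform = 100_000             # annual platform price
add_on = 30_000                # each add-on solves one use case
single_purpose_rival = 60_000  # a competitor that solves just that one use case

land_deal = platform + 1 * add_on    # 130,000 to solve one problem
expand_deal = platform + 5 * add_on  # 250,000 to solve five problems

print(land_deal, "vs rival at", single_purpose_rival)  # 130000 vs rival at 60000
print(expand_deal / 5)  # 50000.0 per problem once the customer has expanded
```

With one add-on, the platform customer pays more than twice the rival’s price for the same single problem; with five add-ons, the per-problem cost drops below it. Hence the pressure to get more add-ons into the land deal.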

Every platform company has this problem.  Bundles, bands, and hide-the-sausage are the only ways I know to resolve it, by encouraging multiple add-ons to be purchased in the land stage.

• Bundles: Either permanently or on promotion, sell several things together so the platform price isn’t so glaring. This doesn’t solve the single-purpose entry point problem, but it makes jumping straight to expansion more appetizing. See anything with “Suite” in the product name.
• Bands: Same thing with more complexity. See Microsoft’s Office365 price book.
• Hide-the-sausage: Spread the costs of the platform by making it “free” or “cheap” and increasing the cost of all the modules. Discourages customers from buying many solutions unless combined with bundling or banding to force a second discounting scheme in. See Google.
• Of course, hide-the-sausage can be reversed: charge once for the platform and then make all the add-ons free. Doing so reverses the incentives and encourages customers to download lots of add-ons, increasing support and development costs and decoupling financial signals from product development. This is a great way to cross the Bill Gates line: your apps are published as guidance, and your partners are encouraged to make the money that you’re not making on your platform. See Salesforce.

There is no best option, in my opinion. I will quote Clint Sharp’s comments on pricing model changes though: “a great way to initiate a denial of service attack against your PM team is to constantly start up new debates about pricing models.”

Licensing thoughts continued...

Saturday, October 5, 2019

Scripts for Adulting


  1. Hello, I’ve been admitted to the 2019 class and I have a question about my high school grades. Can you help? My reference number is #######. * Get the dates and account numbers together ahead of time.
  2. I’m going to get a bad grade in a class, or possibly a withdrawal. * Just the facts! They don’t care what happened.
  3. Will this affect my acceptance to university?
  4. Does it make a difference if I take the bad grade or the withdrawal?
  5. Are there recommended steps I should take?
  6. What was your name? * In case you need to explain where you got advice later.
  7. Thank you!


I’ve found that writing little scripts like that really helped my kids with their adulting conversations as they went through high school and into college. My daughter was very upset about the class, but it wasn’t relevant to her major so there was no point in discussing how or why the bad grade was happening.

Plan out what you’ve got to say, plot a path that avoids your own emotional hot buttons, and gather the stuff that you can anticipate needing.

It’s a useful tool for managers as well. Tough conversations are part of the career. If you go in prepared, they are a little less tough.


  1. The company is making a change. * Just the facts.
  2. What’s the reasoning? Quick outline of the process. * Why this is happening.
  3. How does it impact this team? * Most positive spin possible.
  4. How does it impact you? * Simply your opinion of the reasoning and outcome, and how you came to accept that it was acceptable. If it’s not acceptable, save that for the separate communication where you announce your resignation.
  5. Summarize: what’s happening, impact to this team, what should everyone do next.


If you’ve got lots of time to prepare, you might even think through some likely interactions, but that can backfire by helping you spiral back into emotional territory. The goal is to be able to communicate the facts and save your feelings for a different conversation.

Wednesday, August 28, 2019

Platform and Partners, Round Two

After reviewing this post on platforms and partnerships, there’s more to dig into. By definition, you can’t cross the Bill Gates line by yourself, but who should you be seeking partnership with? Developers who consult or consultants who develop? What tools should you build for them?

At the end of that article, I felt that free-form coding was required. My reasoning is that the platform vendor cannot predict valuable use cases well enough to produce the toolkit that a consultant would need. This is not a condemnation of the toolkit or the consultant. Rather, it is a recognition that high-value jobs require deep linkage to the customer’s processes and data systems, meaning that they are complex and customer specific. This means you’ll need consulting service shops to achieve them, not development shops.

Consulting services partners make only linear contributions to your bottom line, though. Managing and supporting them therefore needs to be a linear cost, and that implies keeping their toolkit minimal and simple.

The most elegant and efficient way to reach this state is to not provide a special toolkit to service partners at all; instead, partners work with the same toolkit that your own development teams use. Imagine a company in which every team’s functionality is available via service interfaces that are designed to be eventually public. Such a company is not only using Conway’s Law for good, it is enabling partners by enabling itself. This doesn’t eliminate partner-vendor squabbling, but it can keep the tenor focused on more easily resolved questions. It’s easier to answer “we want a bigger slice of these deals” (single variable, money) than “we want an easier and more flexible development toolkit” (what do easy and flexible even mean?).

“APIs everywhere” as a partner service model also generates the maximum value for a development partner, who is now unconstrained. They may plug in to your stack anywhere and create value in any way. However, this is not an unalloyed good. Where many services partners are constrained to a single platform vendor (or at least a preferred vendor per use case), the development partner has a more flexible destiny. They are also more inclined to risk, since their business model rests on big upfront investments with uncertain but hopefully exponential rewards. If the platform vendor’s stack is completely open, a development partner can easily subvert the vendor’s intentions, and is far more likely to try than a services partner. A few interesting examples: Elastic’s fight with AWS, Airbnb’s uneasy relationship with listing indexers, and Twitter’s on-again-off-again stance towards third parties. One might use an analogy: services partners for dependable, steady growth; development partners for high-risk, potentially explosive growth. This can be a helpful model when deciding which partners to support, but it isn’t as helpful when deciding what toolkit to ship to them.

It’s worth picking apart the difference between technical support of a model and legal support of a model. Open APIs as a technical choice are clearly beneficial: internal and external teams are on the same footing, allowing maximal value to customers for minimal effort. The downsides of the model are in business risk. Remediation of that risk is a business problem, and the resolution is a partnership contract requirement plus technical enforcement via access keys. That’s obviously not an option for a fully open source system, but I can’t say I’d advise a fully open source approach to any platform business anyway.
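As a sketch of what that technical enforcement via access keys could look like (the key store, scopes, and partner name here are hypothetical, not any particular vendor’s API):

```python
# Hypothetical sketch: gate the same internal service interfaces behind
# partner access keys, so the business contract has a technical backstop.
PARTNER_KEYS = {
    "key-abc123": {
        "partner": "ExampleServicesCo",
        "scopes": {"read:events", "write:workflows"},
    },
}

def authorize(api_key: str, required_scope: str) -> bool:
    """Allow the call only if the key exists and carries the needed scope."""
    grant = PARTNER_KEYS.get(api_key)
    return grant is not None and required_scope in grant["scopes"]

# Example: a partner integration asking to write workflows.
assert authorize("key-abc123", "write:workflows")
assert not authorize("key-abc123", "admin:billing")  # not in the contract
```

The APIs stay open as a matter of engineering; what a given partner may touch becomes a contract term enforced by the keys they are issued.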

Licensing models, self-service style

In my other two posts about licensing, I suggested that flat rate pricing is best for customers, but impossible in enterprise sales because of the variable and high costs of making a sale.

Those costs are difficult to understand if you haven’t been exposed before, but they are all too real. Weeks spent in negotiating a price are only the start; weeks spent in negotiating contract language are just a feature. What about indemnification? Can the vendor insure the customer against potential supply chain threats for the foreseeable future? It’s simply a matter of cost... and that insurance policy is now part of pricing.

What will happen to the deal if the vendor is purchased by another company? Can the customer audit the vendor’s source code? If the vendor goes insolvent, does the customer get to keep the source code? Yes, I have seen a customer organization running their own version of a formerly commercial product a decade after the vendor threw in the towel.

I was once involved in a contract between two industry titans that included a minimally disguised barter of services, and one of those services was sold to a third company as soon as the ink was dry. The cost to make and then keep that sale was... not small.

Even when it’s not titans you’re selling to... you can still be blocked by the competitive moat around enterprise software. If the thing you’re selling is close to the customer’s mission or has visibility across the customer’s entire org, they’re more likely to apply scrutiny, and it’ll be harder to fudge compliance and legal details. The amount of blockage is directly tied to the amount of coverage or visibility your product will have for the customer. For instance, you might expect a gigantic financial customer to care greatly about indemnification, but they probably don’t for a specialist tool that gets quietly used by 25 people in the security ops center every now and then. Whereas if you’re selling something that sits in a mid-sized retailer’s cardholder data environment and manages the entire cashflow, they’ll probably care a lot more.

So as a vendor, there is a reasonable pressure to force your cost of sale down, and there is a clear goal: the almost zero cost clickwrap contract. Simply set your terms, disallow negotiation, and let the dollars roll in. It’s the ultimate expression of flat-rate pricing.

This is a fine approach for what I like to call lifestyle businesses: if you just need enough money for you and your cat to live happily, then sell away. The catch is that the most lucrative potential customers literally can’t buy from your business because of the potential risk. You’re probably good to go if your addressable market is consumers and your price fits on a credit card, but big business is off the table.

Wait! Singleton users and small teams buy in this model all the time! Expense report reimbursement is open to question, but no one cares if the price is low enough. A frustrated employee may just eat a few dollars for a productivity-enhancing tool. The clickwrap model gets extremely blurry around personal computing appliances. I’m writing this in Bear on my iPhone; how is my employer to distinguish it from work I do with and for the company on the same device with the same app? (In my case, I use different editors for different roles.) Corporate and government legal departments try to draw a clear line, but IT struggles to implement that line, and a clickwrap vendor is therefore always in danger of being pinched by changes in policy. Shadow IT is no place to make big money.

However, shadow IT does have some astounding success stories: Amazon Web Services is the obvious example, but Balsamiq, Basecamp, and Glitch (FKA Fog Creek) come to mind as well. If the official channels cannot support a use case and the need is great, then people will find a way.

Part four.

Sunday, August 25, 2019

Put PICA on Notable Events


For every notable event, the analyst adds a little PICA.

What’s a notable event? It’s a record that something happened, or an alert that something is expected to happen. It theoretically requires some form of response, from “read and move on” to “read and acknowledge” to “follow this run book” to “alert the [managers|Red Team|President] and [start the clock|increase logging|take cover]”. A notable event may be an Incident or Event in ITIL terms, a Ticket in bug tracker or fry cook terms, or simply grist for a machine learning mill.

What is PICA? An acronym borrowed from the Dallas News by Clayton Christensen.
* Perspective: what is the importance of this event to the organization’s goals? Does it affect security posture? A service level objective? Is it a compliance breach?
* Insight: what is the cascade potential for the risk represented by this event? Does it require immediate remediation or is it just a counter to be watched?
* Context: Is this event a one-off, or is it common? Is it more common for the grouping than the overall organization?
* Analysis: is this type of event occurring more or less frequently than in the past?

With a special incident, that statement is clearly true. The SAN is almost full: my perspective tells me that systems are going to stop working, and my insight into those systems lets me understand the knock-on events across my organization. I know the context, why we need these systems to fulfill our mission and why that is important, and I use my analytical skills to determine a course of action.
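Here is a minimal sketch of what carrying PICA on an event record might look like, using the SAN example above; the field names and event shape are invented for illustration.

```python
# A notable event annotated with PICA fields. The structure is hypothetical.
from dataclasses import dataclass

@dataclass
class PICA:
    perspective: str  # importance to the organization's goals (security posture, SLO, compliance)
    insight: str      # cascade potential: remediate now, or just a counter to watch?
    context: str      # one-off vs common, relative to the group and the whole org
    analysis: str     # trend: occurring more or less frequently than in the past?

@dataclass
class NotableEvent:
    source: str
    message: str
    pica: PICA

event = NotableEvent(
    source="storage-monitor",
    message="SAN volume at 97% capacity",
    pica=PICA(
        perspective="Core systems stop working when the SAN fills; SLOs at risk",
        insight="High cascade potential; remediate now rather than watch a counter",
        context="Rare for this array, but capacity alerts are common org-wide",
        analysis="Third occurrence this quarter, up from zero last year",
    ),
)
```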

However, not every firewall-rule-triggered alert in a SOC or breakfast ticket in a diner immediately requires a great deal of insight. As a developer, I see your low-impact typo ticket and I fix the bug.

There is still a need for PICA on these low- or no-impact notable events. Perspective: they still consume human attention, wasting the most expensive resource in the environment. Insight: this kind of alert is ripe for automation, and a fine place to use a machine learning algorithm. Context: reducing the flow of useless alerts makes the important ones stand out better. Analysis: a cost-benefit calculation suggests how much time it’s worth spending to eliminate that noise.
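To make that analysis step concrete, here is a rough cost-benefit sketch; all of the numbers are invented.

```python
# Invented numbers: is it worth automating away a noisy, low-value alert?
alerts_per_day = 200
minutes_per_alert = 2            # read, acknowledge, dismiss
analyst_cost_per_hour = 75       # fully loaded, hypothetical

daily_waste = alerts_per_day * minutes_per_alert / 60 * analyst_cost_per_hour
yearly_waste = daily_waste * 260  # working days

automation_cost = 20_000          # engineering time to suppress or auto-close

print(round(yearly_waste))             # ~130000 per year of analyst attention
print(automation_cost < yearly_waste)  # True: the noise is worth eliminating
```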

Managing the Unmanageable

I’ve been thinking off and on about containers (FKA partitions, zones, jails, virtualized apps) and mobile ecosystems for a few years. These technologies have gone through several iterations, and different implementations have different goals, but there is an overlap in the currently extant and growing versions. Hold containers, iOS/Android, and MDM-plus-App Store enabled laptops together and look at the middle of the overlap: 1) management is done in the surrounding systems, not in the daily-use artifact; 2) management needs are minimized by simplicity.

A container is built, run, and deleted. There is no “manage”. To change or fix it, you go upstream in the process. A phone app may be installed or uninstalled, but it takes care of updating itself from someone else’s activities upstream in the process, just like a container. Users and admins don’t patch them; instead, vendors push updated versions into an infrastructure that automatically does the needful. Even the infrastructure around the app or container (firewall policies, routing policies, device controls, all the policies and configuration that make the system secure and effective) is managed centrally and pushed into place.

This vision of abstracted management has attractions from many perspectives, which are obvious enough that I won’t waste time repeating them. It is also frustrating to teams tasked with monitoring and managing to existing standards of compliance. The new model is for computing appliances and services, and does not fit well with the current model of managing general purpose operating systems. It’s arguable whether the computing appliance model can apply to general purpose computers at all; it’s theoretically possible to lock one down sufficiently, but the result isn’t better than a mobile device. This attempt failed in the BYOD (Bring Your Own Device) laptop cycle, but the idea of being able to add and remove “appliance mode” on a general purpose device hasn’t died, and only time will tell. BYOD seems to be working just great for phones, after all.

The power of systems management tools comes from the philosophy of the general purpose operating system. Programs run with each other in a shared environment which fosters their working together to serve one or many users. Users, including administrators, can remotely do whatever they need via networking. In the primordial slime of the business opportunity called systems management, administrators would use remote shells to script their desires into being, pulling packages into place when needed. Much has changed, but the fundamentals of these tools remain the same: a remote program with privileges, command and control networking, and a file movement tool.

The new model does not allow these fundamentals. We aren’t running as root on the remote host anymore. While mobile and laptop systems retain broader abilities, in the strictest container models even communication and files are only allowed to come from one place. There are exceptions as a matter of theory, but organizations who embrace the philosophy are going to prefer blocking those exceptions. And they will be right, because running visibility and control agent programs in a container or a mobile app sucks. Not only does it increase the weight and computational complexity of the target, it does so for no good reason; the fabric and philosophy of the new model are designed to prevent anything useful being done from this vantage point. Your process is not supposed to worry about other processes. As a user, you’re supposed to worry about your service fulfilling its purpose, not management functions.

This philosophy is not a comfort to compliance auditors, some infosec teams, or traditional systems administrators (hi, BOFH and PFY). It sounds too much like developers sitting in an ivory tower and announcing that they have handled everything just fine, a priori. Even if they say “devops” and “SRE” a lot. But at the end of the day, organizations are regularly accepting a similar statement from their everything-as-a-service vendors, and not many can fully resist the new model’s siren song. Still, a new computing model is not able to ignore law, finance, and customary process. The result is a grudging middle ground of management APIs, allowing a minimum viable level of visibility and control into the new model.

These APIs do not restore management fundamentals; they only allow you to log, to measure states, and to initiate change within the new model’s parameters. Posit that breaking the new model’s rules is going to fail, immediately or eventually. A management vendor is therefore in a jail cell, and has to differentiate from inside when offering visibility and control for computing appliances. Windows CE was the last gasp of general purpose operating systems for appliance-friendly use cases (Linux may appear to be an exception, but the deployed instances used in appliances are hardly sporting full Unix shells). From here on out, endpoints are full general purpose machines, a mobile approach, or a handful of frozen kiosk and VDI images. Servers are a mass of general purpose machines, mostly on virtualization, sometimes delivered as a service, with an explosively growing segment of service-oriented app virtualization.
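As an illustration of how thin that jail cell is, here is a hypothetical sketch of a management integration limited to such an API. The endpoints and fields are invented, not any real MDM or container platform’s interface, and it assumes the Python requests library is available.

```python
# Hypothetical sketch of a "new model" management integration: no root shell,
# no file push; only what a vendor-provided API chooses to expose.
import requests

BASE = "https://mdm.example.com/api/v1"   # invented endpoint
HEADERS = {"Authorization": "Bearer <token>"}

def measure_state(device_id: str) -> dict:
    """Read-only: ask the platform what it is willing to tell us."""
    r = requests.get(f"{BASE}/devices/{device_id}/state", headers=HEADERS, timeout=10)
    r.raise_for_status()
    return r.json()

def request_change(device_id: str, policy: str) -> None:
    """Initiate change within the model's parameters: we ask, the platform acts."""
    r = requests.post(f"{BASE}/devices/{device_id}/policies",
                      json={"policy": policy}, headers=HEADERS, timeout=10)
    r.raise_for_status()

state = measure_state("device-1234")
print(state)  # log what we can see; there is no agent to run arbitrary commands
if not state.get("compliant", True):
    request_change("device-1234", "remediate-baseline")
```

Every vendor writing against such an API gets the same verbs; the interesting work has to happen above this layer.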

A new type of management agent is born for these API-driven appliance models. Maybe it’s implemented in “sidecar” containers or as “MDM approved” apps, maybe it lives fully in the cloud, maybe it’s the focus of a new vendor or the side project of an established one. There will certainly be pronouncements that it brings new value to the use case. It doesn’t matter how it’s implemented or marketed, though: it’s accessing the same APIs as everyone else. Its best efforts are limited to “me-too”. Differentiation is either in costly and difficult up-stack integration, or a capital-burning race to open-sourced commoditization.

A customer who wants single-pane-of-glass visibility is left with few options: build their own analytics, invest in data lake technologies, or buy extensions to their main management tools. Almost all select two of the three for resilience.

It can make for an unpleasant experience in the management tool, where this ghost of management is fit into the same console and mental model as the vendor’s full-powered, real capabilities. “Here is your domain, in which you can do what is needed to ensure your organization’s mission! Except on these special systems, where you know a lot less and can’t do much of anything.” Customer expectations are sort of hit but kind of missed, and no one is very happy. Some vendors can sell “know less and do less” alongside “full visibility and control” for the same price. Others may adjust the license model instead.

So, is the single pane of glass worth a cognitively dissonant user experience? Or does the customer split their visibility and control tools and buy something else to glue things back together, moving that dissonance higher up the stack? Because there will surely be dissonance when clicking for action in tool A has to go through tool B’s brokerage into tool C for execution.

There is a useful comparison to minority or legacy operating systems. Management and visibility tools universally reduce their capabilities on platforms that aren’t as important to their customers, so very few are excellent on Solaris, AIX, or HP-UX. The important difference is that a vendor’s reduced AIX capabilities are a matter of choice. If the market demanded, the vendor could eventually resolve the problem. A management vendor cannot change the operating model of an entire ecosystem, so computing appliances are not like legacy computing. But there is an analogy in that the tools do not align perfectly with customer needs, leaving gaps to fill with people and process.

If we imagine a perfectly amazing management tool for AIX that doesn’t integrate with the tools used for Linux and Windows, the choice becomes clearer. Customers don’t require visibility and control for operating systems or computing models, but rather for business functions and services. Buying different tools for different systems can be a required stopgap, but it’s not a goal in itself. Therefore, a single-product, single-pane-of-glass approach wins over a multi-product, best-of-breed approach. The remaining question is one of approach: do you use an endpoint-centric vendor that was born from visibility and control, or a data-centric vendor that was born from searching and correlation? The answer lies in your organization’s willingness to supplement tools with labor. A data lake can have great visibility, but it has no native control, meaning another gap to cross before even hitting the API gaps in the new computing model.

The goal of the new model is to minimize and ultimately remove management entirely. As long as it is unsuccessful in this goal, there will be rough edges between the new model and the old. Those edges bias towards the old model consuming the new.