Friday, November 23, 2018

Growing the Company

Recent conversations on going public have reminded me that some assume taking a company public is inherently, completely good, and necessary to being Important in the Industry. Here are a few reasons why that is not always true, noting that I am not a financial professional.

Posit that the natural course of a successful company is to achieve a monopoly. Oligopoly will do in a pinch, but the ideal scenario for a company is to take all the cookies.  This is generally viewed as a bad thing, so societies might pass laws or enact breakups to prevent it.

That needs a government actor, and current theory holds that government is bad. One should create market forces that enable good outcomes via greed and invisible fairy hands. To the degree that theory admits monopolies are bad, public markets seem to be the anti-monopoly agent.

Public markets love growth. Vastly oversimplified, there are two types of investments: safety and growth. Bonds and stocks. Monopoly provides safety, whether through bonds or dividends, but it has no growth. A startup provides growth opportunities, but it is not safe.

As an individual investor or fund, this is all fine. Select the balance of safety and risk that makes sense for your goals, and all will be well. As long as there are opportunities. But, if companies achieve their goals, there will just be a few safe monopolies and no growth.

Now let’s play Sim Captain of Wall Street and manage the balance of safety and growth opportunities. The first lever you might try is mergers and acquisitions. Encourage the monopolies to buy each other and form massive conglomerates with a few basic shared functions.

The outcome is socially fascinating, in that it appears to have encouraged the growth of functional careers like project management. Abstracting a role across the units of Buy-N-Large is good prep for considering that role as an abstract function for any organization.

However, it’s tough to argue that the resulting conglomerates have become growth investments. Jamming a bunch of unrelated businesses into a holding entity doesn’t increase productivity.

A more cynical lever exists in the tech industry: encourage the monopolies to self-disrupt. If a company jumps in a new direction, one of two things will happen: succeed and produce new growth for themselves, or fail and produce new growth opportunities for other companies.

Once initial investments are recovered, there’s almost no way to lose in encouraging a mature, successful company to try crazy risks.

Looking at this as Sim Company Leader, I don’t see how farming the market to increase growth helps me get monopoly. It’s great to get windfall money and lower interest loans, but I don’t want to lose control. I may not have a choice though: early investors expect their paydays.

What if I could table flip the market though? It would be distracting from the attain-monopoly game... but after going public, I might be in a mood to gamble on an acquisition or a new product architecture.

It’s all fun and games until someone loses their job, but this cycle, if it’s real, creates higher opportunity jobs by creating duplicative roles across many smaller companies. Not so many gold watch careers though.

Tweetise

Sunday, November 18, 2018

DURSLEy and CAPS

Monitoring and metrics! Theoretically any system that a human cares about could be monitored with these four patterns:

  • LETS
  • USE
  • RED 
  • SLED (can’t find where I saw this now, but it’s the same stuff)

I’m hardly the first to notice there’s overlap... https://medium.com/devopslinks/how-to-monitor-the-sre-golden-signals-1391cadc7524 is a good starting point to read from. I haven’t seen these compressed to a single metric set yet, probably from not looking hard enough. Or because “DURSLEy” is too dumb for real pros.


  • Duration: How long are things taking to complete?
  • Utilization: How many resources are used?
  • Rate: How many things are happening now?
  • Saturation: How many resources are left?
  • Latency: How long do things wait to start?
  • Errors: Are there known problems?
  • Yes: We’re done

These are popular metrics to monitor because they can be easily built up from existing sensors. They provide functional details of a service, in data that is fairly easy to derive information from.
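Building up from existing sensors can be a very small amount of code. Here’s a minimal sketch of deriving DURSLEy numbers from one window of request records; the record fields, worker counts, and units are all my own invention, not from any particular tool (and the Y is left as the joke it is):

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical request records from an existing sensor.
@dataclass
class Request:
    queued_ms: float   # time spent waiting before work started
    service_ms: float  # time spent actually doing the work
    error: bool

def dursley(requests, window_s, workers_busy, workers_total):
    """Derive DURSLEy metrics for one observation window."""
    return {
        "duration_ms": mean(r.service_ms for r in requests),  # Duration
        "utilization": workers_busy / workers_total,          # Utilization
        "rate_per_s": len(requests) / window_s,               # Rate
        "saturation": workers_total - workers_busy,           # resources left
        "latency_ms": mean(r.queued_ms for r in requests),    # Latency
        "errors": sum(r.error for r in requests),             # Errors
    }
```

Note that everything here is a machine fact about the service, which is exactly the point of the next few paragraphs.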

In an ideal world, those metrics are measuring “things” and “resources” that are directly applicable to the business need. Sales made. Units produced.

In a less ideal world, machine readable metrics are often used as a proxy to value, because they are easier to measure. CPU load consumed. Amount of traffic routed.

In the best of all possible worlds, the report writer is working directly with business objectives. CAPS is a metric set that uses business-level input to provide success indicators of a service, producing knowledge and wisdom from data and information.


  • Capacity: How much can we do for customers now?
  • Availability: Can customers use the system now?
  • Performance: Are customers getting a good experience now?
  • Scalability: How many more customers could we handle now?
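As a sketch, imagine a hypothetical ordering service where every input is a business measure (the names, numbers, and thresholds here are all invented for illustration):

```python
def caps(orders_per_min, max_orders_per_min, checkout_passing, p95_checkout_s, target_s):
    """CAPS indicators built entirely from business-level inputs."""
    return {
        "capacity": max_orders_per_min,                      # how much we can do now
        "availability": checkout_passing,                    # can customers order now?
        "performance": p95_checkout_s <= target_s,           # good experience now?
        "scalability": max_orders_per_min - orders_per_min,  # headroom for more customers
    }
```

Nothing in that sketch mentions CPUs or queues; getting from sensor data to those inputs is where the hard work lives.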

These metrics present the highest value to the organization, particularly when they can be tied to insight about root cause and remediation. That is notably not easy to do, but far more valuable than yet another CPU metric.

Report writers can build meaningful KPIs and SLOs from CAPS metrics. KPIs and SLOs built from DURSLEy metrics are also useful, but they have to be used as abstractions of the organization’s actual mission.

Examples: the number of tents deployed to a disaster area is a CAPS metric, but any measure of resources consumed by deploying those tents is a DURSLEy metric. Synthetic transactions showing ordering is possible: CAPS. Load metrics showing all components are idle: DURSLEy.

Tweetise

Saturday, November 10, 2018

Licensing thoughts, round two


Tweetise.

License Models Suck got a lot of interesting conversations started, so it’s time to revisit from the customer’s perspective. Let’s also be clear: this is enterprise sales with account reps and engineers. Self-service models are for another day.

As a vendor, the options I describe seem clearly different; but as a customer I just want to buy the thing I need at a price that works. “Works” here means “fits in the budget for that function” and “costs less than building it myself or buying it elsewhere”.

A price model has to work when growth or decline happens. As a customer I build a spreadsheet model to find out if the deal would quit working under some reasonably likely future scenarios. If it passes that analysis, fine. I don’t care if the model is good or bad for the vendor.
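That spreadsheet can be as small as this sketch; the unit counts, price, term, and budget are invented for illustration:

```python
def deal_cost(units, unit_price, years, annual_growth):
    """Total spend over the deal term if unit count changes each year."""
    total = 0.0
    for _ in range(years):
        total += units * unit_price
        units = round(units * (1 + annual_growth))
    return total

# Does a three-year deal at $40/unit still fit a $150k budget if we
# shrink 20% a year, stay flat, or grow 50% a year?
budget = 150_000
for scenario, growth in [("decline", -0.2), ("flat", 0.0), ("growth", 0.5)]:
    cost = deal_cost(1_000, 40.0, 3, growth)
    print(scenario, cost, "works" if cost <= budget else "quits working")
```

In this made-up case the flat and decline scenarios fit, and the growth scenario blows the budget, which is exactly the kind of future the model exists to catch.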

So, the obvious question: why doesn’t flat rate pricing rule the world? It’s certainly the easiest thing to model and describe! Answer: organizations are internally subdivided.

The customer may work at BigCo, and BigCo may use some of the vendor’s products, but the customer doesn’t need to buy for all of BigCo. They need to solve the problem in front of them. Charging them a flat BigCo price for that problem doesn’t work.

What’s more, the customer can’t do anything to make it work. Maybe they can help the sales team pivot this into a top-down BigCo-wide deal, but that’s going to take a long time and require all sorts of political capital and organizational skill that not every customer has.

This is easy to solve, right? Per-unit pricing is the answer! Only, we’re talking enterprise sales and products that require hand-holding. The vendor has a spreadsheet model too, and that model doesn’t work if a sales team isn’t producing enough revenue per transaction.

If the customer’s project isn’t big enough, then the deal won’t work with per-unit pricing. In response, the vendor will drop deals that are too small, set minimum deal size floors for their products, or make product bundles that force larger purchases.

If the customer has no control over the number of units, a per unit price might as well be a flat rate. There’s no natural price elasticity, and the only way to construct a deal is through discounting.

Why not get unnatural then? Just scale the price into bands! You want 10 of these? That’s $10,000 each. You want 10,000 of these? That’s $10 each. Why not sell the customer what they want?
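Banded pricing is trivial to express; here’s a sketch using the made-up numbers above:

```python
# Price bands: at or above this quantity, this unit price applies.
BANDS = [(1, 10_000.0), (100, 1_000.0), (1_000, 100.0), (10_000, 10.0)]

def banded_unit_price(quantity):
    """Return the unit price for the highest band the quantity reaches."""
    price = BANDS[0][1]
    for min_qty, unit_price in BANDS:
        if quantity >= min_qty:
            price = unit_price
    return price
```

Notice the absurdity baked into those example bands: 10 units and 10,000 units both total $100,000, so the customer’s incentive is to game the band boundaries rather than buy what they need.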

Because the cost to execute a deal and support a customer is variable and difficult to model, and the more complex a pricing model is, the less clarity you have into whether your business is profitable and healthy.

The knock-on effects from that non-clarity are profound, because they affect anything that involves planning for the future. It’s more difficult to raise capital or get loans, negotiate partnerships, or hire and retain talent.

And so we mostly see fairly simple pricing systems in mid-sized enterprise software vendors. I’m most familiar with “platform with a unit price, less expensive add-ons locked to the same unit quantity.”

This pricing works for the middle of the bell curve, but small customers are underserved while large customers negotiate massive discounts or all-you-can-eat agreements that can hurt the vendor.

Sunday, October 28, 2018

Phases of Data Modeling

Say that you want to use some data to answer a question. You’ve got a firewall, it’s emitting logs, and you make a dashboard in your logging tool to show its status. Maybe even alert when something bad happens. You’ve worked with this firewall tech for a few years and you’re pretty familiar with it.

You’ve built a tool at Phase 1. A subject matter expert with data can use pretty much anything to be successful at Phase 1. That dashboard may not make a lot of sense to anyone else, but it works for you because you’ve seen that when the top right panel turns red, the firewall is close to crashing. You know that the middle left panel is a boring counter of failed attackers, while the middle right panel is bad news if it goes above 3.

One day your team gets a new member who’s interested in firewalls and they start asking questions. You improve the dashboard in response to their questions, and other teams start to notice. Some more improvements and you can share your dashboard with the community. Maybe it gets you a talk at a conference. This is a Phase 2 tool. People don’t need to know as much as you do about that firewall to get value from your dashboard.

So far so good... but now you start to get some tougher questions. “Can I use this in my SIEM?” Or “can you do the same thing for this other firewall?” Now you’re getting asked to put this data into a common information model.

This is a Phase 3 problem: simply understand the data sources and use cases well enough to describe a minimalist abstraction layer between them. There is some good news here, because Phase 3 tools are hard to do and therefore worth money. Why? Well, let’s look at the process:

1. Read the information model of the logging or security product in question and understand what it’s looking for. There’s no point in modeling data it can’t use.
2. Find events in your data that line up with the events that the product can understand. Make sure they’re presenting all of the fields necessary, figure out how you’ll deal with any gaps, and describe the events properly.
3. Test that it works, then start over with the next event. Continue until you’ve gotten everything the model covers now.
4. Decide if it’s worth it and/or possible to extend the model and build the rest of the possible use cases.
5. Decide if it’s worth rethinking your Phase 1 and Phase 2 problems in light of the Phase 3 work (probably not).
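Steps 1 through 3 above, reduced to a toy sketch. The native field names, the target model, and the gap-handling choice are all invented here, not taken from any real product:

```python
# Map native firewall fields to a hypothetical common information model.
FIELD_MAP = {"srcip": "src_ip", "dstip": "dest_ip", "act": "action"}

# Fields the downstream product requires for this event type (step 1).
REQUIRED = {"src_ip", "dest_ip", "action"}

def normalize(raw: dict) -> dict:
    """Rename the fields the model understands; flag gaps instead of guessing (step 2)."""
    event = {cim: raw[native] for native, cim in FIELD_MAP.items() if native in raw}
    missing = REQUIRED - event.keys()
    if missing:
        event["_gaps"] = sorted(missing)
    return event
```

The tedium comes from repeating this for every event type the model covers, then testing each one (step 3).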

This is tedious work that requires some domain knowledge. That doesn’t mean you should wait until a domain-knowledgeable wizard comes along... domain knowledge is gained through trial and error. Try to build this thing! When it doesn’t work, you can use this framework to find and fix the problem.

Let’s also consider a common product design mistake. When using this perspective, it’s easy to think that the phases are a progression through levels, like apprentice to journeyman to master. Instead, these phases are mental modes that a given user might switch between several times in a working session.

I’m fairly proficient with data modeling, but that doesn’t make me a master of every use case that might need modeled data. An incident response security analyst may be amazing at detecting malicious behavior in the logs of an infrastructure device, but that doesn’t mean they actually understand what the affected device does.

This distinction is important when product designs put artificial barriers between phases of use, preventing the analyst from accessing help they need in the places they need it, or preventing them from moving beyond help they don’t need. More on product design next week.

Not a tweetise, just a link

Sunday, September 30, 2018

Weekly Status

Tweetise

People are creatures of habit, and effective work is produced by grooming useful habits. Here’s a quick write up of a useful habit: the weekly status report.

I haven’t always written these, and I haven’t always worked for people who’ve wanted to receive them, but I’ve been at my most effective when I was writing and discussing them.

A weekly report of your status is a distillation of the most important things that have happened in the last few days. It’s also an agenda for the next week, and a chance to reflect. It can also help you actually have a weekend, because you’re closing the books on Friday.

How to work this magic? You’ll need a text editor. I’m also fond of a cloud service for syncing text documents. You’ll need a communication tool too: email, slack, or a wiki.

The document: a simple text document with no formatting.

Hi,

Meta:
* 1 line about you. Happy? Sick? Overworked?

$project:
* 1-3 single line statements of status affecting events.
* Started X
* Y Ongoing
* Finished Z
* Last release, date, purpose
* Next release, ETA, purpose
* The goal after that

*Repeat as needed.*

Thanks,
$me

Every Friday when I’m about ready to call the day done, I open this document and replace last week’s material with this week’s. I reflect on how I’m doing and how that presents. Same items not moving? Can’t stand looking at this any more? I need help and this is my chance to ask.

Sync: If it’s possible to put this text block in a cloud sync service, then it’s possible to do this on your phone while riding to the airport or standing in the boarding line. That’s remarkably useful. The big thing is to see what you wrote last week.

A push-based communication is ideal, because the recipients aren’t going to look at a web page. They’re all too used to safe and boring status, so don’t be boring. Email or Slack work. Skip the formatting and pictures. Just the status.

I’ve been in teams that used wikis or Evernote for status updates, and it can work, but it’s notably worse; those are the teams where a lot more phone calls were needed. There’s a reason those tools all send email notifications.

Finally, who to send your status to? Your manager is supposed to be thrilled to get a concise, timely, and accurate ping of status. However, folks sometimes fall short of ideals, and that doesn’t have to stop you from doing this work for yourself.

Given sufficient tuning and need, the weekly status can go to your teammates, your direct reports, or a cross-functional group. I do think it’s important to send it to someone, otherwise it’s a diary. But as in any writing, think of the audience.

Sunday, September 23, 2018

Community

Tweetise.

So you’re a software company, and you want to have a community. What next?

“Why community” is a great place to start: the stated reasons and budget are often somewhere in marketing, but the community is equally important for customer support. Community is where soft guidelines are communicated, FAQs are advertised, and newcomers are made welcome.

All of that means reduced customer support costs, because the folks answering these questions aren’t on your payroll. Note that it also means you don’t have a lot of control over what they say, so we’ll dig into that in a bit.

A software community is a forum for discussions about your software and the problems that it solves. This may take many forms, non-exclusively. Asynchronous email lists (Mailman) and fora (Lithium). Synchronous channels like Slack, or face-to-face user groups and conferences.

In an ideal world these are all options on the table, but there’s a very definite cost gradient to consider. The more synchronous you get, the more it costs for fewer people; but they get better results. Support may be a major beneficiary, but they have no budget power.

Marketing is the team paying for this if anyone does, so the dollars are entirely dependent on the community’s ability to meet marketing’s agenda. That can be an issue for the types of folks who offer free support for someone else’s software.

Who are those community members, anyway? They are wonderful gems. Customers, pro service partners, maybe internal employees who just can’t get enough. They’re putting “spare time” into your support forum because they care about people being successful, with your product.

They’re also doing work for themselves, building a community reputation. They’re the pool you’ll hire from as you grow. In the meantime, are you offering them a path to stay with you? Certifications? Awards? Where’s the public recognition of their effort?

Unfortunately, people are people and those nobly motivated activities might get blurred by bad behavior. While solving your problems, your community may also air views on race, sex, religion, politics. Fights happen. Do you even know, and are you prepared to keep the peace?

Moderation is absolutely required if you don’t want your community to turn into a cesspool. And so we return to the question of budget. Moderation means people, and people gotta eat, and quality people expect quality pay and tools for their job.

At a tiny scale, your company is able to do this work “on the side”. Just like the social engineering of people and project management, your star employees quietly shoulder it all while you congratulate yourself on not actually needing those functions.

Don’t kid yourself; there’s someone taking care of the social work you’re not seeing, and you’d better recognize their contribution before it stops. Keeping people working well together doesn’t just happen.

At a massive scale, there’s so much moderation and so much community that tiny and medium communities are forming around the main communities. If you’re getting a B-Sides, you’ve got a whole new set of problems.

The medium sized scale is where things are toughest. Big enough to truly need part-time or full-time paid help, but small enough to question that need and try to half-ass it. So, for those in that boat, let’s consider what a successful community looks like.

New users are welcomed & their problems are answered correctly. People are free to be themselves, but bigotry and bullying are not tolerated. Thorny problems get redirected to proper channels. Fights are resolved promptly without collateral damage.

The stars of the community are recognized and rewarded, regardless of where their paychecks originate. They keep magnifying your reach because they’re feeling good about doing that.

If that doesn’t sound like your community, you might be better off shutting it down until you hire someone to do it right. Buying tools isn’t going to help.

Sunday, September 16, 2018

Security Logging

Tweetise form.

Security logging is interesting. Detecting security and compliance issues means uncovering nasty little leakages of unintentional or surprising information all over. When you add a powerful security tool to the environment, it starts to shine light into dark corners.

No one expects that temporary file of sensitive data or the password in a script to be recorded. Credential tokens start safe, but get copied to unsafe paths. They’re not intentional flaws, but rather hygiene issues.

If a tool detects security hygiene issues, the responding team must decide if they believe the tool or not, and then what to do about it. As a vendor planning that security tool, figuring out which way the customer team will go is an existential crisis.

Obviously, if the customer doesn’t believe the tool, that sale isn’t made or that renewal doesn’t happen. Less obviously, even if the customer does believe the tool, success is not guaranteed. The social angles are too complex for today’s thread.

The logical path for tool developers is to log all data, offending or otherwise. It’s impossible to describe every possible problem scenario and filter objectionable material. Even catching the low-hanging fruit is risky: it builds an expectation that the tool solves the hard problems too.
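To see why filtering only catches the low-hanging fruit, here’s a naive redaction pass. The patterns are illustrative, and the point is how much they miss:

```python
import re

# A few obvious secret shapes; real leaks rarely look this tidy.
PATTERNS = [
    re.compile(r"(?i)(?:password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS-style access key id
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]+"), # bearer tokens
]

def redact(line: str) -> str:
    """Replace anything matching a known secret shape."""
    for pattern in PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line
```

A line like `curl -u admin:hunter2` slips straight through, because nothing in it says "password". That’s the expectation problem in miniature.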

Worse, if the tool does not record the raw data and only records that a user did a prohibited thing at place and time... then the tool won’t be trusted. The user doesn’t remember doing a bad thing, and now it’s human versus log. Human wins.

So financial pressure leads to security tools logging everything they see. This is not ideal because it can mean worsening the security situation by logging and transmitting sensitive tidbits. Instead of searching every mattress in town, our raccoon-masked baddie can rob the bank.

Because belief is ahead of action in the customer’s decision path, data collection problems are true of failing security tools as well as successful ones. Everyone wants to be trusted, so everyone records at high fidelity.

Encrypt all the things is then used to protect these high value stores. I’m reminded of the DRM problem though... the data has to be in usable form to get used, so there’s always an exposure somewhere. Makes you wonder how many SOCs have extra folks listening in.