Sunday, September 16, 2018

Security Logging

Tweetise form.

Security logging is interesting. Detecting security and compliance issues means uncovering nasty little leakages of unintentional or surprising information all over. When you add a powerful security tool to the environment, it starts to shine light into dark corners.

No one expects that temporary file of sensitive data or the password in a script to be recorded. Credential tokens start safe, but get copied to unsafe paths. They’re not intentional flaws, but rather hygiene issues.

If a tool detects security hygiene issues, the responding team must decide whether they believe the tool, and then what to do about it. For a vendor planning that security tool, figuring out which way the customer team will go is an existential crisis.

Obviously, if the customer doesn’t believe the tool, that sale isn’t made or that renewal doesn’t happen. Less obviously, even if the customer does believe the tool, success is not guaranteed. The social angles are too complex for today’s thread.

The logical path for tool developers is to log all data, offending or otherwise.
It’s impossible to describe every possible problem scenario and filter out objectionable material. Even catching the low-hanging fruit is risky: it builds an expectation that the tool solves the hard problems too.
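
To make that concrete, here’s a minimal sketch of the pattern-based redaction a tool might attempt. The patterns and examples are invented, not any real product’s rules; the gap between what they catch and what actually leaks is the whole problem.

```python
import re

# A few obvious secret shapes. Invented examples, nowhere near complete.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS-style access key IDs
    re.compile(r"(?i)bearer\s+[a-z0-9._~+/-]{20,}"),  # bearer tokens
    re.compile(r"(?i)password\s*=\s*\S+"),            # password assignments
]

def redact(line: str) -> str:
    """Replace anything matching a known secret shape with a placeholder."""
    for pattern in SECRET_PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line

# Catches the low-hanging fruit...
print(redact("export AWS_KEY=AKIAABCDEFGHIJKLMNOP"))
# ...and silently passes any secret it has no pattern for.
print(redact("curl https://alice:hunter2@prod-db/backup.tgz"))
```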

Worse, if the tool does not record the raw data and only records that a user did a prohibited thing at place and time... then the tool won’t be trusted. The user doesn’t remember doing a bad thing, and now it’s human versus log. Human wins.

So financial pressure leads to security tools logging everything they see. This is not ideal, because it can mean worsening the security situation by logging and transmitting sensitive tidbits. Instead of searching every mattress in town, our raccoon-masked baddie can just rob the bank.

Because belief is ahead of action in the customer’s decision path, data collection problems are true of failing security tools as well as successful ones. Everyone wants to be trusted, so everyone records at high fidelity.

“Encrypt all the things” is then the strategy used to protect these high-value stores. I’m reminded of the DRM problem, though... the data has to be in usable form to get used, so there’s always an exposure somewhere. Makes you wonder how many SOCs have extra folks listening in.
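
As a minimal sketch of why encryption only moves the exposure rather than removing it (using Python’s cryptography library; the record and names are invented):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # key management is its own hard problem
vault = Fernet(key)

# The sensitive record is protected at rest and in transit...
stored = vault.encrypt(b"user=alice password=hunter2 host=prod-db")

# ...but anyone doing the analyst's job (or sitting where the analyst
# sits) gets it back in the clear. The exposure moves; it doesn't vanish.
print(vault.decrypt(stored))
```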

Sunday, September 9, 2018

Disrupting Ourselves

Tweetise here

Let’s talk about some received wisdom: “disrupt your own market before someone else does it to you”. Sensible advice: complacency can kill. Except disruption is generally a pioneering activity, and the survival rate for pioneers is lower than for copycats.

Corporate blindspots being what they are, this style of transition is more often a new company’s opportunity to disrupt an existing market. When done internally, it’s as disruptive as calving a new company.

Still, let’s assume our company has decided to change. Further assume that we’re not completely altering the business model from vertical integration to horizontal commoditization or vice versa. That takes executive team guidance, but I generally write about technology companies.

There are many architects with opinions on horizontal versus vertical technology stacks. Worse, they win budget to shift the stack under the rubric of self-disruption. Horizontal and vertical both work, so a team can start anywhere on the cycle and shift to the next step.


Moving from vertical to horizontal:
* Identify functional components
* Abstract those components with APIs (sketched just after this list)
* Replace the ones that can’t elastically scale
* Start writing large checks to your IaaS of choice
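
Here’s a minimal sketch of what that abstraction step can look like, assuming a hypothetical storage component. All the names are invented; the point is the seam, not the implementation.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The API drawn around one functional component."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(BlobStore):
    """The original, vertically integrated implementation."""

    def put(self, key: str, data: bytes) -> None:
        with open(f"/var/data/{key}", "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(f"/var/data/{key}", "rb") as f:
            return f.read()

class ElasticObjectStore(BlobStore):
    """The elastically scaling replacement: delegate to the IaaS of your
    choice, then start writing those large checks."""

    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("left as an exercise for the budget")

    def get(self, key: str) -> bytes:
        raise NotImplementedError("left as an exercise for the budget")
```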

That’s all fairly straightforward for a new project, but if you’ve got an existing customer base there are some challenges.
* Maintain performance and quality while complicating the architecture
* Decide whether to expose or hide the APIs… who’s embracing and extending whom?

Worst of all:
* Does the license and business model still work after this change, or do you need to revisit product market fit?
* Backwards compatibility... well if you’re not Microsoft, let’s all have a good laugh over that one.

Moving from horizontal to vertical:
* Identify painful integrations that need consolidating.
* Define interfaces where your solution will tie into the rest of the world (sketched just after this list).
* Execute on ease of purchase, use, and assurance. The buyer must feel confident they didn’t make a mistake here.
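
A minimal sketch of that consolidation, assuming two hypothetical vendor APIs; everything here is an invented stub.

```python
class TicketSystemClient:
    """Stub for one painful integration (invented)."""
    def open_ticket(self, summary: str) -> str:
        return "TICK-123"

class PagingSystemClient:
    """Stub for another painful integration (invented)."""
    def page_oncall(self, message: str) -> None:
        pass

class IncidentDesk:
    """The vertical solution: one interface tying into the rest of the world."""

    def __init__(self):
        self.tickets = TicketSystemClient()
        self.pager = PagingSystemClient()

    def report(self, summary: str) -> str:
        # One call replaces two integrations the customer used to maintain.
        ticket = self.tickets.open_ticket(summary)
        self.pager.page_oncall(f"{ticket}: {summary}")
        return ticket

print(IncidentDesk().report("checkout latency spike"))
```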

There’s no lack of startup memoirs. Doing it from within a company is gnarlier, disrupting your own existing system. Professional services and the partner community are going to ask some tough questions. Sales and marketing might not be thrilled about rewriting the playbook.

Transition is reimplementation of capabilities, meaning forward progress slows or halts for at least a year. Strong support in a fat Q2 evaporates in the following lean Q1. Teams that mismanage their planning find their work going into the bit bucket, along with some executives.

To forestall that reckoning, leadership spends significant effort badmouthing the existing product: hopelessly outdated, unscalable, and just plain bad. This is easy and effective, and therefore the worst damage of the entire process. It burns the boats and commits the company.

Once “Something must be done” is accepted wisdom, all manner of crazy can be considered reasonable. Add some sunk costs and it takes a major crisis to reset direction.

Monday, September 3, 2018

Engines and fuel - who writes quality content?

Tweetise.

In software, everyone wants to build engines, and no one wants to make fuel. A platform for executing content has high potential leverage and lots of vendors make those. The expected community of fuel makers rarely materializes.

Content for software engines breaks down along two axes: simplicity versus complexity, and generality versus specificity to the customer’s environment. Half of the resulting quadrants are unsuitable for sharing communities, because their content isn’t general.

Simple and customer specific: a list of assets and identities. Vendors clearly can’t do these at all, so they make management tools. This quadrant is an obvious dead zone for content.

Complex and customer specific: personnel onboard and termination processes. Again, dead zone.

Sad times occur when companies try to operate in one of the dead zones: for example, the process automation engine. A hypothetical vendor faces years of customer issues root caused to process failures, so they decide to help customers succeed by automating the process.

Turns out that the customers who think about process already have 20 of these on the shelf. The customers who don’t? Some aren’t interested, and some want to be told what they should be doing. They need fuel, and the vendor can’t give it to them without professional services.

Complex and general: compliance tests for commercial off-the-shelf (COTS) solutions. This is where in-house content teams are justified; their success is measured in lower sales cycle times and professional services spend. Those metrics are hard to defend, but that’s another story.

Compliance auditing is an excellent place to observe this type of content in the market. Anything that can execute script on an endpoint and return results can be used to check compliance with (say) the PCI dirty dozen.

You’d be mad to do this with a tool that doesn’t already have some content though. Who wants to maintain the scripts to determine if all your endpoint agents are getting their updates? So customer demand for content is high.
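
For flavor, here’s a toy version of one such check. The agent path and the seven-day threshold are invented; a real engine would ship hundreds of these, which is exactly the content problem.

```python
#!/usr/bin/env python3
"""Toy compliance check: has the endpoint agent updated recently?"""
import os
import sys
import time

AGENT_STATE = "/var/lib/endpoint-agent/last_update"  # invented path
MAX_AGE = 7 * 24 * 3600                              # invented threshold

try:
    age = time.time() - os.path.getmtime(AGENT_STATE)
except FileNotFoundError:
    print("FAIL: agent state file missing")
    sys.exit(2)

if age > MAX_AGE:
    print(f"FAIL: agent last updated {age / 86400:.1f} days ago")
    sys.exit(1)

print("PASS: agent updates current")
sys.exit(0)
```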

The supply side is more challenging. The engine vendor makes some content because they need it, but the work is harder than it appears, so they’re eager to share the load. Why can’t they? There are four paths they might try.

1: They pay their own developers to write content. Mostly gets what they want, at high cost. 2: They pay others to write it, through outsourcing or spiffs. Mostly gets less than they want, at high cost. 3: They market a partner program and SDK. Mostly gets nothing, at low cost.

4: They do nothing and hope for the best. In a strong social community, this can actually work great. Participants learn new skills, improve their employability and social standing, and genuinely make the world a bit better. Without that community, the vendor had better pay up.

The strongest motivation to make complex content for an engine to execute is if you own that engine or make a living with it. Next is if you improve your individual standing with this work. The weakest motivation is held by other software companies seeking marketing synergy.

Which brings us to the last quadrant, simple and general: a malicious IP address blacklist. This is where entire companies are justified through attractive-looking profit margins, and where success can be measured in the usual metrics of sales.

The threat intelligence market is a recent example of this effect. TI comes from four sources: internal teams, paid vendors, private interest communities, and open source communities. In the first three, employees produce quality content for real world use cases.

Ask your friendly neighborhood security expert which TI sources they prefer, and I expect the answer will look like @sroberts’: https://medium.com/ctisc/intelligence-collection-priorities-10cd4c3e1b9d

Taking responsibility for TI content leads to increasing risk avoidance as well, further reducing its value. Over time the developer facing a flood of support tickets will err on the side of caution, accept more false positives, and add caveat emptor warnings.

Another interesting factor in these models is the mean time to maintenance. Threat intel needs analysis of fast-moving criminal groups and rapidly decays in value. Compliance content relies on analysis of slow-moving components and can last for years of low maintenance costs.

I think that this dichotomy in maintenance cost holds across most examples in the simple to complex axis. Connectivity drivers are complex and last for a long time. VM or container definitions are simple wrappers around complex content and last for a short time.

The requirement for maintenance defines whether the vendor offers support for the content, which in turn defines many customers’ willingness to depend on it and consultants’ willingness to provide it.

Playing it out, higher maintenance content is less supported, and therefore more likely to be used by risk-embracing customers with strong internal teams and bias towards community software. Lower maintenance, higher support, more services dollars.

Sunday, August 26, 2018

Line Product Management Process

Tweetise (thanks @djpiebob!)

I have some issues with the concept of “automating” or “scaling” product management, which I went into in this blog post: http://www.monkeynoodle.org/2018/03/automating-ers-through-support-is-crap.html — what I haven’t written up is what I do use.

This is the process for directly running a product or multiple products; leading a team that runs products requires a different set of tools, which I’ll go into some other time.

It’s pretty old school! I use whatever is available for shared documents, [Confluence|Wiki|Google docs], to keep a freeform record of customer contacts. During a meeting I take notes on my phone (Apple Notes) or my laptop (BBEdit), depending on the need to avoid keyboard noise.

ASAP after the meeting I rewrite them into the shared doc and share the rewritten result with interested Slack teams. I’ve also tried SFDC call notes and direct to JIRA, but found it impossible to correlate and review across customers and projects.

The first raw notes document is a mess of shorthand, repetitions, action items, factoids, and acronyms that may only make sense to me. The second is still just notes, but readable by other team members. This is the bibliography of citations for everything else.

I might use it for competitive research as well, or I might put competitive notes in a separate document if they get too big. I’ll also break the customer notes doc off into new docs every X pages or Y months, which can be useful for seeing changes in the market requirements.

I regularly re-read those notes and research items, looking for common threads, opportunities, and leverage points. I start copying these into a summary at the top of the shared notes document, and I use them to produce more structured requirements docs.

I need a Market Requirements Document (MRD): What is the problem, what industries are affected, how much money is available, what are the titles and responsibilities of the Champions, Gatekeepers, and Buyers?

I need a Product Requirements Document (PRD): What would we build for that market? What features would it need, and who would those features serve? I usually write up enough high level features for two or three major releases before I start trying to decide what might get done.

For a small project I’ll combine the MRD and PRD. The PRD will be used to produce JIRA epics and stories. This means rewriting and converting from tool to tool, which means doing the creative work of refining, sifting, correlating, synthesizing, and sorting ideas.

The development team is introduced to these drafts as well, and we start to refine them together. Whiteboards, wireframes, and flowcharts start happening here. Maybe some prototype code.

I rewrite the epics and stories of the PRD in greater detail every time I touch them. I also clone them, move them from story to epic, throw them away and start over. Tickets are free, roadmaps are predictive estimates, and the backlog is a playground.

Change tracking, prioritization, progress reporting, workload sizing, and release estimation are driven from JIRA data, often processed in Splunk or Excel for presentation in [PowerPoint|Keynote].
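
As a sketch of what that reporting can look like: pull a release’s issues from JIRA’s REST API and sum story points by status. The instance URL, credentials, JQL, and custom field ID are all placeholders; story points live in a different custom field on every JIRA instance.

```python
import collections
import requests

JIRA = "https://example.atlassian.net"  # placeholder instance
STORY_POINTS = "customfield_10016"      # varies per JIRA instance

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={
        "jql": 'project = PROD AND fixVersion = "2.1"',  # placeholder JQL
        "fields": f"status,{STORY_POINTS}",
        "maxResults": 200,
    },
    auth=("pm@example.com", "api-token"),  # placeholder credentials
)
resp.raise_for_status()

# Sum story points by status to see how much of the release is done.
points = collections.Counter()
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    points[fields["status"]["name"]] += fields.get(STORY_POINTS) or 0

for status, total in points.most_common():
    print(f"{status:>12}: {total}")
```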

Idea accountability and closing the loop with customers is not tracked in JIRA. That’s my responsibility to take care of, which I do by reviewing the customer notes document whenever I have contact with the customer or their sales team.

The system I suggest requires a lot of work. The PM must open themselves to as many sources of input as possible and work to reduce the firehose to sensible, high-leverage ideas for engineering to implement.

Centralization is critical so that the PM’s work is visible and can be taken over by another PM. Some sort of tool helps, but the specific tool chosen doesn’t matter as long as it doesn’t get in the way. The more workflow a tool suggests, the more it’s going to get in the way.

Moving ideas from tool to tool at each stage is actually very helpful. Putting a technical barrier between input and output that requires human brainpower to push things through is analogous to transcribing from longhand notes to an essay in a text editor.

People are excellent at doing all sorts of creative work, but they’re also excellent at avoiding work and justifying results. Getting work done requires forming useful habits, and critically rewriting your own work is one of those.

There are a number of complaints that come up in this conversation, which I synthesize to “that process can’t scale!” As I understand the argument: “As a PM I want to offload portions of the workflow to an automated system or a process that other teams run, so that I can do more”.

Or the more pernicious: “As a PM I want to point other people at automated systems so that they don’t have to interact with me to get what they want”. As an introvert, I do sympathize with this position, but not very much, so let’s drop that one.

The work of doing product management is not automation friendly. Software is eating the world, and as product managers we are the chefs preparing that meal. It’s only natural to look at our own job, see a process and think “that can be automated too!”

It’s not true though, because software can only eat the things that are expressed in numbers without losing context. The computer can’t understand context. People have to do it, so the product opportunity is in personal productivity tools, not team aids.

Handling scale as a PM means managing the number and scope of projects, changing the balance of anecdata and metrics, and avoiding all the easy work that blocks this process with a false feeling of accomplishment.

Saturday, August 25, 2018

English degree, Tech Career

Also tweeted.

What is the value of an English degree in a technology career?

I graduated from UC Berkeley with a degree in English Literature, focus on American poetry. My thesis was on Emily Dickinson. I’ve been working in information technology ever since. So I’m biased on this subject.

I’m hardly the only person with this kind of career path, and I realize how lucky I’ve been. I didn’t always, though. I faceplanted on an interview softball about my education several years ago.

I was interviewing with a rather prestigious company that was riding an amazing wave. They’d recruited me, so I was feeling good. Then: “Tell me about your English degree” and I started digging a hole. I had unwittingly internalized the view of humanities as useless.

Lesson 0: Have something positive to say about every word in your resume. Even if it’s something that your industry stereotypes.

Now hopefully less stupid, I have some thoughts about what the degree has done for me. The English degree taught me to read critically, synthesize information, and write clearly. I use these skills all day, every day.

In the classical education paradigm, this was called Logic and Rhetoric. (https://en.wikipedia.org/wiki/Trivium). @ckindel has posted an excellent update of this mental toolbox here http://ceklog.kindel.com/2018/07/08/tools-to-achieve-clarity-of-thought/ (the linked articles are all worthwhile).

There are two power tools learned in the English degree that are not directly discussed there: academic papers, and poetry.

Economic expression of ideas in standard persuasive forms is key to good writing. An academic paper’s standard form provides two leverage points. It helps you write. Writer’s block is defeated by words on paper, and the form gives you words, showing the gaps that remain. 

Form helps the reader accelerate. Look at the humble 5 paragraph essay. Thesis, three arguments, conclusion. Tell ‘em, show ‘em, tell ‘em again. A skilled reader processes this in seconds, while a less structured rant is a more challenging experience. 

Academic papers also ask the author to focus on quality. Because each sentence will be questioned, each sentence must carry its weight. The Twitter editor adds a similar value to one’s writing.

In a 10 page thesis or a 100 page dissertation, a product requirements document, or an engineering design discussion, writing has a job and every word is in service to that job.

When you take an English degree, you’re writing several 10 page papers a week, and working on longer papers at the same time. This is quite similar to the workload for product managers.

Economic expression of emotion via poetry is the second power tool of the English major. A strictly rational approach to the requirements above is acceptable or even desirable in some contexts, but overall insufficient. 

@brenebrown writes, "We want to believe that we are thinking, rational people and on occasion tangle with emotion, flick it out of the way, and go back to thinking. That is not the truth. The truth is we are emotional beings who on occasion think."

Because a PM must communicate with humans, we need to be able to engage emotions with our language. “Maximizing emotional load of each word through musical awareness” is a rather soulless description of poetry, but it’ll do for function.

Like the mental habits of engineering for scale... these are part of a toolbox that the English degree provides. Reading thousands of pages per week has turned out to be useful in modern life as well.

Sunday, August 12, 2018

Merger & Acquisition Failures

Also available on Twitter.

Sometimes when two companies love each other very much... Companies buy other companies. Maybe it’s to pump market share or shut down competition. Sounds like a boring transaction as long as regulators don’t mind. Or maybe it’s to get technology and people.

Those are exciting projects, full of hope and dreams. And yet, so much of the time the technology is shelved and the people quit. Why is that? Because acquisition alters the delicate chemistry of teamwork and product-market-fit.

Maybe the acquired company continues to operate as a wholly owned subsidiary and little changes for a long time. Or maybe the acquired company is quickly integrated into the mother ship so that all that goodness can be utilized ASAP.

I’m no expert on corporate acquisition, but I’ve had a front row seat for some of these. A few of them could even be called successful. Let’s generalize heavily from after-hours conversations that clearly have nothing to do with any of my previous employers.

The fateful day has come! Papers are signed, the Tent of Secrecy is taken down, and the press release is out. Acquiring teams are answering questions and testing defenses. They’ve got to retain key team members, integrate technology, and align the sales teams before blood spills.

At the same time, they’ve drawn political attention and are certainly facing some negative buzz. In a really challenging environment, they’re also facing coup attempts. M&A is as hard as launching companies, so it’s easy for others to snipe at.

Meanwhile, acquired teams are all over the emotional map. Excited, sad, suddenly rich, furious at how little they’re getting. Are friends now redundant, immediately or “soon”? Who’s on the key-team-member retention list and who’s not: it won’t stay private for long.

After an acquisition one might assume headhunter attention. When better to check in on someone’s emotional state and promise greener grass? Churn commences. The network starts to buzz, people are distracted, and some leave.  Of course, lots stay!

And maybe the folks that stay for retention bonus are a little more conservative. Bird in the hand, part of a bigger safer company, and there’s so much opportunity because everyone else in the big company is beat down and burned out. Sour like old pickles.

It seems that more engineers and salespeople make it through the acquisition than executives. The acquired executives disappear into other projects or out of the company. Resting and vesting, pivoted into something new, but unlikely to still be guiding their old team. So who is?

The middle managers who stay all drift up to fill recently vacated executive slots, where they either grow or flame out. Their attention is diffused into new teams and new problems. PMO steps in heavily, since the acquired company didn’t have one. Devs are largely on their own.

Nature abhors a vacuum, and someone steps in to fill this one. With luck they maintain product-market-fit and mesh with internal requirements. Or they fail and introduce interpersonal conflict to boot. Is this the end of the acquisition road? Or does engineering lead itself?

There’s bugs to fix, and everyone knows what the old company was planning. The acquiring company has lots of new requirements too, like ADA compliance and Asian language support. Who needs customer and market input anyway? After a while the old roadmap is consumed.

There are layoffs of course, and new requirements keep coming. “Please replace these old frameworks with a more modern workalike.” “Please rescue this customer with a technical fix for their social problems.” “Please do something about our new global vaporware initiative.”

The challenge of doing more with less is sort of fun. There are some friends left. And the acquired person feels big. They talk with important customers and executives, and they can spend more time on their home life. More folks have left, so the remaining acquirees are the authorities.

But the recruiter calls have stopped. A temporary market slowdown, or is it personal? Can they get a job outside of the big company anymore? So they reach out and do a few interviews, pass on some lower-paying opportunities, and get shot down by something cool.

Better take more projects in the big company. By now the tech they came in with has lost its shine. Put a cucumber in brine long enough and it’s just another pickle. They’re helping new engineers with weird tools and pursuing new hobbies.

The street cred of being from an acquisition is gone, and they’re neck deep in big dull projects. “Lipstick this pig until it looks fashionable.” “Squeeze more revenue from the customer base.” “Tie yourself into the latest silly vaporware.”

Or even “Propose an acquisition to enter a new market with.” If this is success, who needs competition? When the game is no fun but you have to keep playing, people will change the rules, and that is why politics suck. Good luck out there, but don't stay too safe.

Sunday, July 22, 2018

Tools and the Analyst

also posted as a Twitter thread

Let’s say I’m responsible for a complex system. I might have a lot of titles, but for a big part of my job I’m an analyst of that system. I need tools to help me see into it and change its behavior. As an analyst with a tool, I have some generic use cases the tool needs to meet.

  • Tell me how things are right now
    • What is the state?
    • Is it changing?
  • Tell me how things have been over time
    • What is the state?
    • Is there any change in progress?
    • Is the state normal?
    • Is the state good/bad/indifferent/unknown?
  • Tell me what I'm supposed to know
    • What is important?
    • What should I mitigate?
    • What can I ignore?
  • Alert me when something needs me
    • What is the problem?
    • What is the impact?
    • Are there any suggested actions?
  • How much can I trust this tool?
    • Do I see outside context changes reflected in it?
    • How does the information it gives me compare with what I see in other tools?
  • How much can I share this tool?
    • Do I understand it well enough to teach it?
    • Can I defend it?
As a generic set of use cases, this is equivalent to the old sysadmin joke, “go away or I will replace you with a small shell script”. A tool that can provide that level of judgement is also capable of doing the analyst’s job. So a lot of tools stop well short of that lofty goal and let the user fill in a great deal.

  • Alert me when a condition is met
  • Tell me how things are right now
  • Tell me how things have been over time (sketched below)
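
Reduced to code, that stripped-down tool is barely more than this sketch (the names and thresholds are invented). Every qualitative judgement is left to the analyst reading the output.

```python
import statistics

def check(samples: list[float], threshold: float) -> str:
    """The whole tool: current state, recent history, one alert condition.

    Whether the number is normal, important, or safe to ignore is left
    entirely to the human reading the output.
    """
    current = samples[-1]
    baseline = statistics.mean(samples[:-1])
    if current > threshold:
        return f"ALERT: {current} exceeds {threshold} (recent mean {baseline:.1f})"
    return f"OK: {current} (recent mean {baseline:.1f})"

print(check([41.0, 39.5, 44.2, 97.3], threshold=90.0))
```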

Maybe the analyst can tie the tool’s output to something else that tries to fill in more meaningful answers, or maybe they just do all of that in their own heads. This is fine at the early adopter end of Geoffrey Moore’s chasm, and many vendors will stare blankly at you if you ask for more.

After all, their customers are using it now! And besides, how could they add intelligence when they don’t know how you want to use their tool? They don’t know your system. But let’s get real: the relationships between customers, vendors, tools, analysts, and systems are not stable.

The system will change, the customer’s goals will change, and the analyst won’t stay with this tool. Even if everything else stays stable, experienced analysts move on to new problems and are replaced by new folks who need to learn.

The result is that tools mature and their user communities shift, growing into mainstream adopters and becoming a norm instead of an outlier. By the time your tool is being introduced to late adopters, it needs to be able to teach a green analyst how to do the job at hand.

How’s that going to work? Here are a few ideas:

0: ignore the problem. There’s always a cost:benefit analysis to doing something, and nature abhors a vacuum. If a vendor does nothing, perhaps the customer will find it cost-effective to solve the problem themselves.
Look at open source software packages aimed at narrow user communities, such as email transfer. Learning to use the tools is a rite of passage to doing the job. This only works because of email-hosting services, though.
Because email is generally handled by a third party today, the pool of organizations looking at open source mail transfer agents is self-selected to shops that can take the time to learn the tools.

1: ship with best practices. If the product is aimed at a larger user community, ignoring the problem won't work well. Another approach is to build in expected norms, like the spelling and grammar checkers in modern office suites.
An advanced user will chafe and may turn these features off, but the built-in and automated nature has potential to improve outcomes across the board. That potential is not always realized though, as users can still ignore the tool’s advice.
An outcome of embarrassing typos is one thing, but an outcome of service outage is another. Since there is risk, vendors are incentivized to provide anodyne advice and false-positive prone warnings, which analysts rapidly learn to ignore.
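
A toy sketch of that dynamic, with invented rules standing in for the anodyne advice: the checker warns, never enforces, and the user is free to ignore it.

```python
# Built-in "best practice" rules. Invented examples of ignorable advice.
RULES = [
    (lambda cfg: cfg.get("tls") is False,
     "TLS is disabled; this is rarely intentional"),
    (lambda cfg: cfg.get("retention_days", 0) > 365,
     "retention over a year may violate data policy"),
]

def warn(config: dict) -> list[str]:
    """Advise, never enforce: nothing here blocks a deployment."""
    return [message for rule, message in RULES if rule(config)]

for message in warn({"tls": False, "retention_days": 400}):
    print("WARNING:", message)
```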

2: invest into a services community and partner ecosystem. No one can teach as well as the person who learned it first. Some very successful organizations build passionate communities of educators, developers, and deployment engineers.
Organizations with armies of partners have huge reach compared with more narrowly scoped organizations. However, an army marches on its stomach, and all these people have to be paid. The overall cost and complexity for a customer goes up in line with ecosystem size.

3: invest into machine intelligence. If the data has few outside context problems, a machine intelligence approach can help the analyst answer qualitative questions about the data they’re seeing from the system. Normal, abnormal: no prob! Good, bad: maybe.
It takes effort, and risk is not eliminated, so it’s best to think of this as a hybrid between the best-practice and services approaches. Consultants need to help with the implementation at any given customer, and the result is a best practice that needs regular re-tuning.
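
Here’s a minimal sketch of the “normal or abnormal” half, using a rolling z-score. The window and threshold are assumptions, and note what’s missing: it says nothing about good or bad.

```python
import statistics
from collections import deque

class AnomalyFlagger:
    """Flag values far from the recent norm. Says 'abnormal', never 'bad'."""

    def __init__(self, window: int = 50, zmax: float = 3.0):
        self.history = deque(maxlen=window)
        self.zmax = zmax

    def observe(self, value: float) -> bool:
        abnormal = False
        if len(self.history) >= 10:  # need some baseline before judging
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            abnormal = abs(value - mean) / stdev > self.zmax
        self.history.append(value)
        return abnormal

flagger = AnomalyFlagger()
for v in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 55]:
    if flagger.observe(v):
        print(f"abnormal: {v}")  # normal or abnormal: no prob
```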

Perhaps we are seeing a reason why most technology vendors don’t last as independent entities very long.