Monday, July 2, 2018

Dev and Test with Customer Data Samples

The short answer is don’t do it. Accepting customer data samples will only lead to sorrow.

  • Not even if they gave it to you on purpose so you can develop for their use case or troubleshoot their problem.
  • The person who wants you to do that may not be fully authorized to give you that data and you may not be fully authorized to keep it. What if both of you had to explain this transaction to your respective VPs? In advance of a press conference?
  • Even if the customer has changed the names, it’s often possible or even easy to triangulate from outside context and reconstruct damaging data. That’s if they did it right, turning Alice into Bob consistently. More likely they wiped all names and replaced them with XXXXXX, in which case you may be lucky, but probably have garbage instead of data.
  • Even if everyone in the transaction today understands everything that is going on… Norton’s Law is going to get you. Someone else will find this sample after you're gone and do a dumb.
Instead of taking data directly, work with your customer to produce a safe sample.


At first, you may look at a big data problem as a Volume or Velocity issue, but those are scaling issues that are easily dealt with later. Variety is the hardest part of the equation, so it should be handled first.

Are you working with machine-created logs or human-created documents? 


For machine-created logs:

  1. Find blocks of data that will trigger concern. Since we do not care about recreating a realistic log stream, we can choose to focus only on these events. If we do want the log stream to be realistic for a sales demo, we will need to consider the non-concerning events too, but finding the parts with sensitive data helps you prioritize.

  2. Identify single line events.
    • TIME HOST Service Started Status OK
  3. Identify multi-line transactional events.
    • TIME HOST Received message 123 from Alice to Bob
    • TIME HOST Processed message 123 Status Clean
    • TIME HOST Delivered message 123 to Bob from Alice
  4. Copy the minimum amount of data necessary to produce a trigger into a new file.
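The extraction steps above can be sketched in a few lines. This is a minimal illustration, not a production tool: it assumes a log format like the examples, and treats email addresses as the triggering data and "message NNN" as the transaction key — both of those patterns are assumptions you'd replace with your own triggers.

```python
import re

# Hypothetical patterns for this example; real triggers depend on your data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+")
MSG_ID_RE = re.compile(r"message (\d+)")

def extract_trigger_events(lines):
    """Collect the minimum set of lines needed to reproduce a trigger:
    single-line events containing sensitive data, plus every line of any
    multi-line transaction that shares a message ID with one of them."""
    triggered_ids = set()
    singles = []
    for line in lines:
        if EMAIL_RE.search(line):
            m = MSG_ID_RE.search(line)
            if m:
                triggered_ids.add(m.group(1))  # part of a transaction
            else:
                singles.append(line)           # standalone triggering event
    # Second pass: keep every line of each triggered transaction intact,
    # even the lines that carry no sensitive data themselves.
    transactions = [l for l in lines
                    if (m := MSG_ID_RE.search(l)) and m.group(1) in triggered_ids]
    return singles + transactions

sample = [
    "TIME HOST Service Started Status OK",
    "TIME HOST Received message 123 from alice@example.com to bob@example.com",
    "TIME HOST Processed message 123 Status Clean",
    "TIME HOST Delivered message 123 to bob@example.com",
]
print("\n".join(extract_trigger_events(sample)))
```

Note that the non-triggering "Service Started" event is dropped, but the "Processed" line survives because it belongs to a triggered transaction — that's the context that naive redaction destroys.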


For human-created documents:

  1. Find individual blocks of data that will trigger concern: names, identifiers, health information, financial information.

  2. Find patterns and sequences of concerning data. For instance, a PDF of a government form is a recognizable container of data, so the header alone is a sufficient indicator that you've got a problem file. A submitted Microsoft Word resume might have any format, though.

  3. Copy the minimum amount of data necessary to produce a trigger into a new file.


Simply replacing the content of all sensitive data fields with XXXXXX will work for a single event, but it destroys context. Transactions make no sense, interactions between systems make no sense, and it’s impossible to use the data sample for anything but demoware. If you need to produce data for developing or testing a real product, you need transactions and interactions.

  1. In the new files, replace the sensitive data blocks with tokens. 
    1. Use a standard format that can be clearly identified and has a beginning and end, such as ###VARIABLE###.
    2. Be descriptive with your variables: ###USER_EMAIL### or ###PROJECT_NAME### or ###PASSWORD_TEXT### make more sense than ###EMADDR###, ###PJID###, or ###PWD###. Characters are cheap. Getting confused is costly.
    3. Note that you may need to use a sequence number to distinguish multiple actors or attributes. For example, an email transaction has a minimum of two accounts involved, so ###EMAIL_ACCOUNT### is insufficient. ###EMAIL_ACCOUNT_1### and ###EMAIL_ACCOUNT_2### will produce better sample data. 

  2. Use randomly generated text or lorem ipsum to replace non-triggering data as needed. Defining "as needed" can seem more art than science, but as a rule of thumb it's less important in logs than documents.
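The tokenization step can be sketched like so. This is a minimal example, assuming email addresses are the sensitive blocks; the token format and sequence numbering follow the conventions above, so each distinct actor keeps a consistent identity across the transaction.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+")

def tokenize_emails(lines):
    """Replace each distinct email address with a numbered token like
    ###EMAIL_ACCOUNT_1###, keeping the numbering consistent across lines
    so transactions still make sense after sanitization."""
    token_for = {}  # address -> token; preserves actor identity

    def repl(match):
        addr = match.group(0)
        if addr not in token_for:
            token_for[addr] = f"###EMAIL_ACCOUNT_{len(token_for) + 1}###"
        return token_for[addr]

    return [EMAIL_RE.sub(repl, line) for line in lines]

events = [
    "TIME HOST Received message 123 from alice@example.com to bob@example.com",
    "TIME HOST Delivered message 123 to bob@example.com from alice@example.com",
]
for line in tokenize_emails(events):
    print(line)
```

Because the mapping is remembered, the sender in the first event and the sender in the second event come out as the same ###EMAIL_ACCOUNT_1### token — unlike blanket XXXXXX replacement, the interaction is still legible.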


Raw samples as above are now suitable for processing in a tool like EventGen or GoGen. This allows you to safely produce any desired Volume, Velocity, and Variety of realistic events without directly using customer data or creating a triangulation problem.

Sunday, June 24, 2018

From Enterprise to Cloud, Badly

Sometimes enterprise companies try to go cloudy for bad reasons with bad outcomes. I’m going to talk about those instead of well-planned initiatives with good outcomes because it’s more fun.

So you’ve got an enterprise product and you’re facing pressure to go to the Cloud… after all, that’s regular recurrent revenue instead of up-front tranches, and très chic as well! Unfortunately, there are more ways to fail than there are to succeed. Let’s review.

What is the real motivation? Is it to better serve existing customers or to capture new customers? Companies can rationalize an emotional desire to do something with anecdotal data from cherry-picked customers. It is sad to see reality reasserted by subsequent events.

Where is the data suggesting that existing customers need this, and are you really sure that you’ve captured what they want and planned a move that will help them? Have you all considered that you’re changing the compliance picture for both parties?

Maybe the answer is that you don’t care about existing customers and will leave them on existing product while pursuing new customers. Assuming that the product market fit of your current successful company can be recreated in a new market on a new platform shows some hubris.

If it’s for a new market, what is the plan to break into that market? Does your brand help you there at all? Do you have people who can do it? Is there a plan to avoid losing your existing market? Are you building a new team, or adding this job to your existing one?

Selling enterprise solutions into a market that doesn’t already use them is a very tough job, starting with explaining why there is a problem and ending with why you have a solution. Your enterprise company solves a problem. Does this cloud market have that problem too?

Are you planning to continue producing the on-prem variant or going cloud-only? The textbook answer is to build for one and back-port to the other, which is terrible. The resulting products are suboptimal for one or both environments.

Option: build for on-prem first. Your cloud product is customer one for a new set of features you’re adding to on-prem. Fully automated configurability via API (Application Programming Interface) and/or CLI (Command Line Interface) and/or conf files. Fully UX'ed (User Experience) configuration for admin CRUD (Create Read Update Delete) functions. Really solid RBAC (Role Based Access Control) and AAA (Authentication, Authorization, and Accounting).

The features required by a Cloud deployment are interesting to the largest enterprise customers, and not so important to anyone else. They want those features because they’re operating a service for internal teams. You’re making your product ready for use by an MSP (Managed Service Provider).

In fact, you probably are one now. There’s a hosted variant of your product, and a team operating it, staffed with non-developers. It's a services company inside of a products company. Welcome to the SHATTERDOME.

This sucks. Everything’s a ticket, ops hates your product, your customers hate ops, and the MSP is not as profitable as the pure software side. Worst, your best customers have jumped into it with both feet because you're owning their ops problems now. Suboptimal.

Another option is to select partners to run this MSP business, but that produces so many weird problems. Loss of customer control, coopetition, revenue recognition, channel stuffing, product release planning, blame games at every step of customer acquisition and retention.

A service-partner MSP model also gives competitors and analysts a pain point to focus on and opens weird new threat surfaces. But it keeps your software product company’s motivations and responsibilities crystal clear! That clarity may not be as obvious to the customer.

That option sucks, so let’s do cloud first and back-port to on-prem. Now you’re building a work-alike second version of your old product, new code with parallel functionality. Contractually you’ve got to keep the old product going, but the A Team has moved on. Hire accordingly.

Cloud first is great! There's all this IaaS (Infrastructure as a Service) stuff you can use! Which doesn't exist on-prem, and differs between platforms. That's okay, there's all these abstraction layers! Which add expense and hurt performance. Do this enough and you're shipping the old product in a VM (Virtual Machine).

Wait a minute. The goals of cloudy micro-service engineering are to enable cost-optimization via component standardization, load-shifting, and multi-tenancy. So we're not going to ship our Cloud product until it does those! Good plan.

Functions cost money only when they’re used, customers are getting what they need, done. There’s a lot more code and companies between your functionality and the hardware though, and that means more opportunities for failure. Say hi to Murphy.

Object-oriented programming versus functional programming comes to mind. Every micro-service is a black box with APIs describing how it fits into the larger system. Harder to troubleshoot, easier to scale, costly to design and maintain. The right tool for some jobs.

For instance, it’s brilliant for sharing resources between low utilization customers. It’s fabulous for ensuring that AAA and RBAC are designed correctly at the foundation. It’s just right for scaling dependent processes with uneven requirements.

Does that mean it’s better for streaming or batching? YMMV (Your Mileage May Vary). What problems can it solve? Anything, given sufficient time and effort. Just like OOP vs FP, the engineering isn’t very important as long as it’s in service to a business need.

Multi-tenancy assumes large numbers of low-interaction customers. Someone who doesn’t use your product heavily and will self-service their own management needs is a perfect fit, and if you have lots of those someones you can make a profit.

If that isn’t what your business looks like, it’s silly to hope new customers will appear fast enough to replace the old business. If your customers are migrating to cloud for SaaS, then you do want this new Cloudy product so you can follow them. Otherwise, why would you do it?

36 person-months later… “The new product is misaligned with our customers!” “Fix sales and marketing?” “We could go back to the drawing board.” “If only we had the same architecture and customer base as a company with a different product market fit!”

Disrupting your own product before someone else does is fine, unless you're doing it too early. It might make more sense to just invest in someone else’s company and revisit this idea later.

Monday, June 4, 2018

It's not a platform without partners

What are the major decisions that a platform needs to make in order to balance incentivizing development vs. maintaining quality and control over their 3rd party app marketplace?

Let's look at this on three scales, in which the right answer for a given team is somewhere between two unrealistic and absolutist extremes.

First decision scale: Allow freeform development or provide a limited toolkit.

A lot of platform vendors assume that everyone building things for them is a developer, because they developed something. These vendors plan for developer support that they can't build, or build stuff that goes unused.

If the people solving problems on a platform are making a living by selling software that they wrote, they're developers. The platform should not proscribe their toolchain choices. They need a freeform environment that lets them do anything, and they don't want safety lines.

They need thorough documentation far more than they need anything else. Seriously, just direct resources to development and tech writing.

If the people solving problems on a platform are making a living by selling or using software that someone else wrote, they are not developers. Call them consultants, integrators, PS, admins, engineers, or architects instead.

Consultants who develop have different needs than developers who consult. They may want to teach their customers to fish, or they may want to be fishmongers, but they're not often trying to create new seafood concepts.

They want an easy way to connect components together to increase value. They want an easy and popular language with lots of community support, libraries of common functions, and simple guardrails that keep things safe and reasonable.

Second decision scale: Allow content dependencies and component reuse or force monolithic apps.

The chaos of extensibility, DLL Hell, a rich ecosystem of shared responsibility and global namespace? A platform that enables connectivity and dependency opens the door to expansion, competition, and growth, at the cost of instability.

The control of stability, bloated monoliths, a statically linked walled garden of singleton solutions? A platform that encourages safety and stability is easy to depend on, at the cost of expensive, repetitive efforts to reach limited solutions.

To consider this difference with more realism and less extremism, compare the Win32 ecosystem of the aughts with the iOS ecosystem of the teens (and note that the former added monolithic containers while the latter added sharing interfaces).

Third decision scale: Allow partners to write closed source or force them to be open source.

The verbs in that sentence are not accidental. A platform has to offer at least some support for closed source, by keeping the source away from the user. Perhaps it goes farther and supports licensing, or brokers purchases for the partners, or not.

Of course, a partner can always choose to post their source code. If the platform only supports a scripting language and the user can just read the JS or PY files, then the partner doesn't have a choice: it's open source.

This scale decides if the possible business models are based on selling software, or selling services. Another way to say that: partners in the ecosystem can grow exponentially at high risk, or linearly at lower risk.

I've matrixed these scales and I don't have great examples for all of the possible combinations, but I do have a suspicion that going above the Bill Gates Line needs freeform development... more thought on that later.

Monday, May 7, 2018

GDPR is great for Facebook and Google

Copying from Twitter to Blogger like carving a wheel from a rock

GDPR is going to be great for Facebook and Google.

"Over time, all data approaches deleted, or public." -- Norton's Law. See threads by Pinboard and AndrewYNg for more background and viewpoints.

Picture two types of data store, public and private. If your store is private, you can use it to advantage as suggested by Andrew Ng's talk. If your store is public, everyone can use it and advantage is created in other ways.

Say a company wants to live dangerously and creates a private store of personally identifiable information about people. Say that company suffers a cybersecurity incident and the store of data becomes public. Say that there is no long term negative impact on that company.

Say that this pattern happens over and over again for many years. A sensible executive might infer that it's not so dangerous to live dangerously.

Of course I'm talking about credit card number storage in the early days of web retail, not anything modern :) For all of its issues, PCI changed the picture of credit card storage by putting real financial penalties on the problem.

Now companies either perform the minimum due diligence, with separate cardholder data environments and regular audits, or they outsource card-handling to another company that is focused on this problem.

Putting more and more credit card data into a single store obviously creates a watering hole problem, but it also allows focusing protective efforts. Overall it's a net good. Until that third party hits a rough patch, but entropy is what it is.

Since GDPR has the same impact on a broader set of personal data, it seems likely that the same outcome will eventually occur. Either protect the data yourself, or outsource the problem to a broker.

The broker needs to provide analytics tools so you can do all the market and product research you wanted the data for. It would also be handy if they'd take care of AAA, minimizing the impact of change (name, address, legal status, &c).

And who's in a great position to do all those things already? Google and Facebook.

Monday, April 16, 2018

Crash notifier

Say you're on OSX and working with some C++ software that might crash when bugs are found... and say you don't want to tail or search the voluminous logs just to know whether there are crashes to investigate. The crashes will show up in the Console application, but leaving it open destroys your performance.

The programs I care about all have names starting with XM. This one-liner pops a notification and then opens the crash report in Console, which I can close when I'm done.

Jacks-MacBook-Pro:SCM-Framework jackcoates$ crontab -l | grep Crash
*/60 * * * * find ~/Library/Logs/DiagnosticReports -cmin -60 -name 'xm*' -exec osascript -e 'display notification "{}" with title "Extremely Crashy!"' \; -exec open {} \;

Thursday, April 12, 2018

Where's the Product in AI?

AI & ML products are harder than they look

Copying from Twitter to Blogger like dabbing paint on a cave wall

AI tech is obviously overhyped, and conflated with ideas from science fiction and religion. I prefer using terms like Machine Intelligence and Cognitive Computing, just to avoid the noise. But if we strip away the most unrealistic stuff, there are some interesting paths forward.

The biggest problem is in defining strong semantic paths from the available data to valid use cases. Many approaches founder on assumptions that the data contains value, that the use case can be solved with the data, or that producer and consumer of data use terms the same way.

Given a strong data system, there is a near term opportunity to build AI-powered toolsets that help customers learn and use the data systems that are available. This is a services heavy business with tight integration to data collection and storage.

This has to be human intelligence driven and therefore services-heavy though, because the data and use cases are not similar between budget-owning organizations. There is data system similarity on low-value stories, but high-value stuff is specific to an organization.

That services work should lead to the real opportunity for cognitive computing, which is augmenting human intelligence in narrow fields. If there is room to abstract the data system, there's room to normalize customers to a tool. Then you've got a product plan, similar to SIEMs.

Put products into fields where the data exists, use cases are clear, the past predicts the future, pattern matching and cohort grouping are effective, the problem has enough value in it to justify effort, and outside context problems don't completely derail your model. Simple!

If you can describe the world in numbers without losing important context, then I can express complex relationships between the numbers.

There's a question being begged though... given a data system that successfully models, how much did the advanced system improve over a simpler approach? Is the DNN just figuring out that 95%-ile outliers are interesting?

If a problem can be solved with machine intelligence, great. If the same problem could be solved with basic statistics, that's cheaper to build, operate, and maintain. It'll be interesting to see how this all shakes out.
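As a sanity check on that question, it's worth running the cheap baseline before the DNN. A minimal sketch with synthetic data, flagging everything above the 95th percentile (the data, the metric, and the nearest-rank percentile method are all invented for illustration):

```python
import random

random.seed(0)
# Synthetic "metric" data: mostly routine values with a few big spikes.
data = [random.gauss(100, 10) for _ in range(1000)] + [250, 300, 400]

def percentile(values, pct):
    """Nearest-rank percentile; good enough for a baseline."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))) - 1))
    return ordered[k]

cutoff = percentile(data, 95)
outliers = [v for v in data if v > cutoff]
print(f"95th percentile cutoff: {cutoff:.1f}, {len(outliers)} outliers flagged")
```

If the fancy model's "anomalies" are mostly the same points this ten-line baseline flags, the extra cost to build, operate, and maintain it is hard to justify.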

Update: an interesting take on this from Benedict Evans:

Wednesday, April 11, 2018

Splunk Apps and Add-ons

What are Splunk Apps and Add-ons? What's the difference?

If you're still confused... it's not just you. The confusion roots back to fundamental disagreements on approach that are encoded into every product the company has ever shipped, so it's tough to recommend a meaningful change.

Splunk apps are folders in $SPLUNK_HOME/etc/apps. They're containers that you put Splunk objects into. You can put anything in them: code, knowledge management configuration, dashboard elements, libraries, binaries, images, whatever. If you just want to put some stuff together and run it on your laptop, you're done at this point. Put things in a folder for organization. Or don't. Whatever.
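Since an app is literally just a folder, the whole layout can be sketched in a few lines. The app name ("my_app") and file list here are invented; the default/ and metadata/ subfolders follow common Splunk packaging convention.

```python
import os
import tempfile

# Minimal skeleton of a hypothetical app folder under $SPLUNK_HOME/etc/apps.
SKELETON = [
    "my_app/default/app.conf",
    "my_app/default/savedsearches.conf",
    "my_app/metadata/default.meta",
]

def create_app(root):
    """Lay out empty placeholder files so the folder structure is visible."""
    for rel in SKELETON:
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        open(path, "a").close()  # touch an empty file

root = tempfile.mkdtemp()  # stand-in for $SPLUNK_HOME/etc/apps
create_app(root)
print(sorted(os.listdir(os.path.join(root, "my_app", "default"))))
```

That's the entire "container" concept: drop conf files and assets into the folder and Splunk picks them up.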

If you want to distribute components in a large environment, if you want to depend on shared components, if you want to avoid huge multi-function monoliths, then you start dividing apps into different types. This is why you see the terms "App" and "Add-on" in Splunk. The App refers to the visible front-end app that a user will interact with. The Add-on refers to administrator-only components. This is where the Splexicon definitions start and stop.

There are multiple types of Add-ons. Their definitions are not entirely well established, and have come and gone in official documentation. Right now, it's here, but don't be surprised if that breaks:

Since I helped to write these definitions in the first place, I feel confident in stating what they should be. However, these rules are breached as often as they are observed, and Splunk themselves are the most likely to ignore all of this guidance. If you want to follow the best possible practice, buy Kyle Smith's book and read that. Here are the possible types:

IA: Input Add-on

This includes and configures data collection inputs only. In practice, these are rare and the functionality is usually stuffed into a TA.

TA: Technology Add-on

This includes and configures knowledge management objects. In practice, many TA's also include data collection inputs. A TA would be able to translate the field names provided by a vendor to field names expected by your users, as well as recognizing and tagging specific event types.

SA: Supporting Add-on

This includes supporting libraries and searches needed to manage a class of data. Let's say we're building a security monitor and considering whether authentication attempts seem malicious or not. An SA could include lookup and summary generators to normalize and aggregate the data from many authentication systems and ask generic questions for reporting and alerting.

  • Example: 
  • Goes on Search Heads
  • You should absolutely have savedsearches.conf
  • It would make sense to include lookups and some dashboards, prebuilt panels, modular alerts, modular visualizations
  • Some SA's include all the IA stuff mentioned above.
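As a concrete illustration of the savedsearches.conf point, a scheduled summary generator in a hypothetical authentication SA might look like this — the stanza name, search string, and summary index name are all invented for the example:

```ini
# savedsearches.conf in a hypothetical SA-authentication add-on
[Summary - Failed Authentication By Source]
search = tag=authentication action=failure | stats count by src, user
dispatch.earliest_time = -15m
cron_schedule = */15 * * * *
enableSched = 1
action.summary_index = 1
action.summary_index._name = summary_auth
```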

DA: Domain Add-on

This includes supporting libraries and searches needed to manage a domain of problems. Let's say we're considering PCI requirement 4, focused on antivirus software being present, configured, and not reporting infections. A DA might include lookup and summary generators to prepare those answers, dashboards to investigate further, and correlation searches to alert on problems.
  • Example: (the "dirty dozen" PCI requirements that can be measured from machine data are each represented with a DA)
  • Goes on Search Heads
  • You should absolutely include dashboards, prebuilt panels, modular alerts, modular visualizations
  • It would make sense to include lookups and savedsearches.conf
And so finally, the App.


The front end that ties it all together and makes it usable. If it's done well, users have no idea everything before this was ever involved. This goes on search heads only.