Tuesday, July 19, 2016

TechEmpower Benchmarks and the Microsoft ASP.NET Core 1.0 Performance Story

I’ve had lots of conversations with fellow CTOs about the TechEmpower Web Framework Benchmarks.  Some really appreciate the value they bring in understanding the performance characteristics of different frameworks.  Depending on the Technical Performance Requirements for your system, this can be genuinely valuable input to your framework selection process.  However, I’ve also had fellow CTOs tell me that they don’t find the tests credible, or that they don’t understand how their favorite framework doesn’t perform better.  Frankly, those two statements are often correlated.  But when Microsoft is talking about “huddling around the benchmark” and “only making a pull request when it’s an order of magnitude of Node.js” – I would say that the benchmarks are providing real value to the development community.

Let me step back and tell a bit more of the story here.

You may or may not be aware that Microsoft just announced the release of ASP.NET Core 1.0: 
Today we are excited to announce the release of ASP.NET Core 1.0! This new release is one of the most significant architectural updates we’ve done to ASP.NET.  As part of this release we are making ASP.NET leaner, more modular, cross-platform, and cloud optimized.  ASP.NET Core is now available, and you can start using it today by downloading it here.
There’s a lot to like about ASP.NET Core 1.0.  It is a viable contender for all sorts of development efforts.  One of the things that makes us even more excited is that Microsoft has focused on Performance as a core attribute:
With a significant rewrite of the web framework, we addressed some performance issues and have set aggressive goals for the future.  We’re introducing the new Kestrel web server that runs within your IIS host or behind another host process.  Kestrel has been designed from the start to be the fastest .NET server available, and our engineers have recorded some benchmarks to prove it.  With the backdrop of the standard TechEmpower Benchmarks, the team used these same tests to validate the speed of Kestrel and have some impressive numbers to report.
Another announcement also touts the benchmarks:
We used industry benchmarks for web platforms on Linux as part of the release, including the TechEmpower Benchmarks. We’ve been sharing our findings as demonstrated in our own labs, starting several months ago.
How did Microsoft get to this kind of performance?  Scott Hunter, Director of Program Management on the App Plat team at Microsoft, tells a bit of the story on the DotNet Rocks Podcast (starting around 19:00).
I got a rash of customers who said to me, “Hey, we went to this TechEmpower Benchmark site.  And we saw where .Net was and where other technologies are, why should we be using your stack?”

Damian said – “I’m going to build perf lab and take a look at this thing.”

In the team room, the team would be huddled around the benchmark saying, “We got another 10,000 or 50,000 or 70,000.”

He would only make a pull request when it was an order of magnitude of Node.js: “If I can get 2 Nodes, then I’ll do a PR.”

It became this thing in the team room where people kept piling in, and it became important.  Then as we started putting the numbers out there, the response was crazy.  The pinnacle of the responses was … Satya [Microsoft’s CEO] got an e-mail from somebody in the Valley, which we ended up seeing at some point.  The person was basically saying, “Hey, I just want to let you know that non-Microsoft and non-DotNet people down here are actually looking at the numbers that one of your teams is doing and we find them super-exciting.”  He said there’s chatter on Slack channels and stuff from people who would not otherwise even be thinking or talking about us.
The Damian he mentions is Damian Edwards.  You can see a talk he gave on Vimeo that also tells a bit of this story.


I have to mention that early in the video Damian asks:
Who’s heard of TechEmpower – okay most people. 
Wow, really - who's that audience?

Damian takes us through how they looked at the Benchmarks and what led them to achieving some remarkable results posted on their intro page:



This is exactly the kind of thing that we were hoping for at TechEmpower when we came up with the benchmarks.  The fact that Microsoft made it a focus and applied resources to produce such exceptional performance is commendable, and the result is a solution that provides tremendous value to us and, more broadly, to the developer community.

Great job Microsoft! 

At TechEmpower, we are very happy to have been part of your journey. 

Wednesday, February 10, 2016

What are the Technical Performance Requirements for your Startup?

By far the most popular post on this blog is 32 Questions Developers May Have Forgot to Ask a Startup Founder.  It was originally written in 2011 and has had amazing staying power.  While I’ve updated it a few times, it continues to get at important questions that startup founders need to be asking.  I find myself sending it to startup founders all the time – maybe just slightly less than Free Startup CTO Consulting.

One notable gap in the 32 Questions post is Performance.  Luckily, some of the folks at TechEmpower just posted Think about Performance Before Building a Web Application.  It does a good job of laying out different aspects of performance that should be thought about prior to creating a system.

I want to take a slightly different cut at the topic of performance.  While it’s a messy topic, I’m going to try to lay out some of the additional questions that developers should be asking a Startup Founder around the performance requirements of the application.

To get us started – and to grossly oversimplify performance – we can conceptually think of the system as consisting of the following elements, which I’ll refer to throughout the post.
  • Requests.  We get a set of requests for our system to do something – generally from users or external systems.
  • Compute.  Our system must access our data, possibly 3rd party services, do some calculation and then get back to the user or the other system with our response.
  • Response. The pages or API response we provide back.
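To make that model concrete, here is a tiny Python sketch of the three elements; the function name and payload are purely illustrative, not from any real framework:

```python
import time

# Toy model of the three elements: a Request arrives, Compute does the
# work (here just a sum, standing in for data access and calculation),
# and a Response goes back along with how long it took.
def handle_request(payload):
    start = time.perf_counter()
    result = sum(payload)                      # stand-in for real computation
    elapsed_s = time.perf_counter() - start
    return {"result": result, "response_time_s": elapsed_s}

response = handle_request([1, 2, 3])
print(response["result"])   # 6
```

Everything in the rest of the post is really about the three pieces of this sketch: how many requests arrive, how long the compute step takes, and how fast the response needs to get back.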

Application Characteristics

Any discussion of performance starts by understanding what the software does, how it is used, and what it interacts with.  A developer should find out:
  • What are the different types of users?  What do they do?
  • Is SEO important?  This is really another type of user: a crawler.
  • Are we providing an API to other systems?  What are the characteristics of how these are used?  Again, this is like another type of user.
  • Are there any time-based operations?  Overnight calculations?
  • Are 3rd party services used?  What are the characteristics of how they are used?  What are their performance characteristics?
  • How many of each type of user are there?  How many might be using the application at the same time (concurrent users)?  Will there be spikes of concurrent usage?
  • What data is used in the application?  How big is the data set?  Are there complex aspects to the data?
  • What computations / algorithms are part of the application?  Are any of the calculations done often?  Are any of the calculations complex?
  • Are there any aspects that have specific performance needs?  For example, are you providing a stats service that needs frequent, fast updates?

Response Time

Once we understand the overall characteristics of the application, then we want to drill down on some specific performance characteristics.  We generally start with response time needs because, in many ways, this is ultimately the measure of performance.  If you think about our system picture above, response time is roughly the time it takes to get our page or API call back from the system. 

It’s well documented that response time has significant business impact, and the impact is quite real.  But as with most things in tech, the picture is far more complicated than that.  Consider two different types of systems:
  • eCommerce or Content web site.  These will have many individual web pages, each with a specific URL, optimized for SEO.  Each page needs fast response time (both time to first byte and total load time).  Pages may not have much dynamic content, and there may be lots of them.
  • Web Application such as web mail or a gated social network.  The content is not used for SEO, so the response time characteristics may be quite different.  If the initial load time of the web application was 10 seconds but bringing up an individual email took less than 1 second, that’s likely an acceptable characteristic.  Technically, this may open the door to a single-page application (SPA).  These often have a relatively long initial load time and then really good performance once you are “in the application.”
Of course, response time is quite a bit more complex than this.  You will be looking at aspects like:
  • Time to first byte (TTFB) vs. Load time
  • API calls
  • Global delivery?
  • Mobile delivery possibly with slow connections?
As a startup founder, you need to think about the characteristics of your solution and what you need from a response time standpoint.
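To make “time to first byte vs. load time” concrete, here is a minimal, self-contained Python sketch that measures both against a throwaway local server (the local server is just so the example runs anywhere; a real measurement would target your actual site):

```python
import http.client
import http.server
import threading
import time

# Throwaway local server so the measurement has something to hit.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

start = time.perf_counter()
conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/")
resp = conn.getresponse()
resp.read(1)                               # first byte of the body arrives
ttfb_s = time.perf_counter() - start       # time to first byte
resp.read()                                # drain the rest of the response
total_s = time.perf_counter() - start      # total load time
conn.close()
server.shutdown()

print(f"TTFB: {ttfb_s * 1000:.1f} ms, total load: {total_s * 1000:.1f} ms")
```

On a real page, the gap between the two numbers is dominated by payload size and rendering, which is exactly why an SPA can have a slow first number and still feel fast afterward.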

Request Volume

Assuming we know what our system needs to produce (the right side of the picture) and how fast (response time), the next big question is really how much.  We want to find out what requests the application gets (the left side of the picture) and how often they come in.  This is generally turned into a Requests per Second number.

Most of the time we will start by asking about Concurrent Users – this is generally the number that startup founders are thinking about when they talk about scalability.  Concurrent users are the number of users on your web site or web application at the same time.  Of course, we need to combine the number of concurrent users with what those users are doing in order to have a fuller picture of what this means.

For example, let’s assume this is a content site.  A human user requests a page of content (likely a relatively simple page), reads or scans it for a little bit, then decides to click something else, which requests a new page.  This cycle may take about 10 seconds.  So some quick math:
  • Each user generates 0.1 requests per second
  • 1,000 concurrent users generate 100 requests per second
Those are really interesting numbers for a technical person.  Of course, this gets much more complicated.  A developer will want to drill down on:
  • Different types of users?
  • Different use cases?
  • Traffic spikes?  TV Coverage?  Real-time events?
  • API Usage?
  • Growth rates?
This will give us a clearer picture of Request Volume for different kinds of requests that our system needs to process.
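The quick math above extends naturally into a back-of-the-envelope sketch.  Note that the 10x spike multiplier below is an assumed rule of thumb for illustration only; the right allowance depends on your traffic patterns:

```python
# Back-of-the-envelope request-volume math from the content-site example.
think_time_s = 10                 # a user clicks roughly every 10 seconds
concurrent_users = 1000

requests_per_user = 1 / think_time_s          # 0.1 requests/second per user
steady_rps = concurrent_users * requests_per_user
print(steady_rps)                             # 100.0

# Spikes (TV coverage, real-time events) are often planned for as a
# multiple of steady traffic; 10x here is an assumption, not a rule.
spike_multiplier = 10
peak_rps = steady_rps * spike_multiplier
print(peak_rps)                               # 1000.0
```

Running the same arithmetic per user type and per use case is how the drill-down questions above turn into concrete capacity numbers.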

Complexity

Now we know the volume we need to satisfy coming in on the left and the response time required on the right.  The middle is what the system needs to do in order to respond to that volume of requests within that timeframe. 

Developers will want to explore with a startup founder where there may be complexity in the system.  We want to do this for two reasons: (1) to understand how complex the software is that we need to build – complexity generally means more time and cost – and (2) to understand how long it will take for the system to calculate responses.  I’m only going to focus on the second aspect – understanding complexity as it relates to performance.  And really I’m only going to scratch the surface here, as complexity is something that a startup founder and a Technical Advisor would need to explore together.
  • Computation Complexity – What do we need to compute?  What are some of the more complex aspects of system calculation?  Natural language processing?  Matching algorithms?  Complex reports?  Are there widely varied use cases with different performance characteristics?  Any blocking operations?
  • Data Complexity – What data are we dealing with? How big is the data set?  What are the largest number of a single type of entity?  Are there aspects that need to be pre-computed?  Any time series data?  Any logging/auditing data?
  • 3rd Party System Complexity – What are the characteristics of the 3rd party systems?  What will happen when they are slow or non-responsive?  What happens when they return poor quality results? 

Last Thoughts


Yikes, that turned into a lot more than I was originally expecting when I started this post.  Hopefully the core model makes sense.  As a startup founder, you need to think about the characteristics of your application and then think about the Volume, Complexity, and Response Time requirements.  For some applications, it will be relatively straightforward to think through the technical performance requirements for your startup.  However, in many cases, this is a place where you really should be talking with a technical advisor or reaching out to get Free Startup CTO Consulting in order to understand what you need.