
In this post, I want to show how simple CSS grid makes it to create a basic table layout with control over the column sizes.

In this simple example, I want my table to have two fairly small columns, with a third column being given the bulk of the remaining space.

You can see the result here:

See the Pen Grid Table by Mark Heath (@markheath) on CodePen.

The great thing about this is that thanks to CSS grid (which is available in a healthy majority of browsers now), you can achieve this whole thing with just one simple CSS rule:

.table {
    display: grid;
    grid-template-columns: 1fr 1fr 80%;
    max-width: 480px;
}

We need to set display to grid (and I've also specified a max-width for my table). Then grid-template-columns is where the intelligent sizing comes in. I'm setting the third column to take 80 percent of the space, and then using fractional units to say that the other two columns should be the same size as each other.

As you can see from the embedded example, this means that if one of the short columns has unexpectedly long content, it doesn't cause the layout of the whole table to be reconfigured. You can also set column widths to auto, and it's worth experimenting with this to see the different results you can achieve.

And that's all there is to it. I added a couple of additional styling properties using the handy nth-child rule to give a different background colour to each column.

Finally, after many years of waiting, it seems easy to create tables in CSS without using the <table> HTML element.

Of course, I should mention that CSS is certainly not my speciality, so feel free to let me know in the comments what the correct/better way to do table layouts in CSS is!


I think it's fair to say that "microservices" has established itself as the leading way to architect a modern distributed cloud-native application. I've discussed many of the advantages of this approach over "monolithic" architectures in my Pluralsight courses such as Microservices Fundamentals.

But it's also well known that microservices bring a lot of challenges with them. How do you perform service discovery? How do you enable developers to easily work with the services locally? How do you implement upgrades while minimizing downtime? How do you effectively monitor everything in a centralized location?

Fortunately, there are multiple tools and frameworks designed to help overcome these challenges. Kubernetes is an excellent orchestration platform that helps us immensely with challenges like deployments, observability, and service discovery. Frameworks like ASP.NET Core come with a whole host of practical features ready to use out of the box, like configuration, logging, and health endpoints. Cloud providers like Azure offer a wide variety of PaaS services that can easily be plugged into a microservices application.

So in theory, it ought to be really easy to develop, test, maintain and deploy microservice applications, right? Well, not so fast. Much progress has been made, but there is still a way to go. And projects like Dapr seek to improve things...

What is Dapr?

At first glance, Dapr might seem fairly unimpressive. It offers a collection of "building blocks" that solve several challenges relating to building microservices. These building blocks include service to service invocation, pub/sub messaging, state management, observability, and secret management.

But don't we already have solutions to all of these? Anyone who's built a microservices application has already had to deal with all those problems, and the tools and frameworks we've already mentioned go a long way to easing the pain.

However, I do think Dapr offers something unique. To illustrate, I'm going to pick just one of the building blocks, service to service invocation, to highlight how Dapr can provide added value on top of what you are already using.

An Example: Service to Service Invocation

When one microservice needs to call another, several things need to happen.

First, we need service discovery - to find the address of the service we're communicating with. Of course, Kubernetes makes this pretty painless with inbuilt DNS. But it's not uncommon for developers to run microservices locally on their development machines, in which case each microservice is at localhost on a specific port number, and you need some alternative mechanism in place to point to the correct service when running locally. With Dapr, you can address the target service by name regardless of whether you're running in "self-hosted" mode (directly on your machine) or on Kubernetes.
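To make that concrete, here's a small Python sketch of how a caller addresses another service purely by name via Dapr's HTTP invoke API on its local sidecar. The app id "order-service" and the method path are hypothetical examples; 3500 is Dapr's default HTTP port.

```python
def dapr_invoke_url(app_id: str, method: str, dapr_http_port: int = 3500) -> str:
    """Build the URL for invoking a method on another service via the
    local Dapr sidecar. The caller always talks to its own sidecar on
    localhost; Dapr resolves the target app id to the right address,
    whether running in self-hosted mode or on Kubernetes."""
    return f"http://localhost:{dapr_http_port}/v1.0/invoke/{app_id}/method/{method}"

# The same call works unchanged locally and in a cluster:
url = dapr_invoke_url("order-service", "orders/123")
print(url)  # http://localhost:3500/v1.0/invoke/order-service/method/orders/123
```

Note that the application never needs to know where "order-service" actually lives; that's the sidecar's job.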

Second, when communicating between microservices, it's important to retry if there are transient network issues. Of course, this is possible to implement yourself with libraries like Polly, but that requires everyone to remember to use it; only recently I found a bug in a microservice caused by forgetting to implement retries. With Dapr, this is just a built-in capability.
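Dapr gives you this behaviour without any code, but to illustrate what you'd otherwise have to hand-roll, here's a minimal retry-with-backoff sketch in Python. The function names and policy values are my own invention; Polly expresses the same idea much more declaratively in .NET.

```python
import time

def call_with_retries(operation, max_attempts=3, base_delay=0.1):
    """Retry a callable on transient errors with exponential backoff.
    This is the kind of policy Polly provides in .NET, and that
    Dapr's sidecar applies for you automatically."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts - surface the failure
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulate a service call that fails twice before succeeding:
attempts = {"count": 0}
def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient network issue")
    return "ok"

print(call_with_retries(flaky_call))  # ok
```

The point of the bug I mentioned is exactly this: if even one service forgets to wrap its calls like this, transient failures turn into user-visible errors.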

Third, it's very important that communication between microservices is secured: communications should be encrypted, and authentication should validate that the caller is authorized. A widely recognized best practice is to use mutual TLS (mTLS), but this can be a pain to configure correctly, and often gets in the way when you are running locally in development. With Dapr, all service to service communications are automatically encrypted for you with mTLS, and certificates are automatically cycled. This takes a huge headache away.

Fourth, distributed tracing and metrics gathering are very valuable for understanding the communications between your microservices. Azure offers this with Application Insights, but again, you don't necessarily benefit from that if you are running locally, and I've had problems in the past getting it correctly configured on all services. With Dapr, observability is another built-in part of the runtime. It uses open standards such as OpenTelemetry and W3C Trace Context, making it easy to integrate with existing tools.

Fifth, another aspect of security is governing which microservices are allowed to call each other. For example, microservice A might be allowed to talk to microservice B but not vice versa. It can be a pain to roll your own framework for configuring something like this, and if you're not a security expert, it's easy to get it wrong. Service meshes can offer this kind of behaviour for a Kubernetes cluster. Dapr can provide the same access restrictions by means of access control lists, which are easy to configure, and they even work when you're running in "self-hosted" mode rather than Kubernetes.
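For illustration only, the one-directional rule in that example boils down to logic like the following. This is not Dapr's actual configuration format (its access control policies are declarative, not code); it just shows the default-deny allow-list behaviour you want, with hypothetical service names.

```python
# Hypothetical one-directional allow-list: A may call B, but not vice versa.
ALLOWED_CALLS = {
    ("microservice-a", "microservice-b"),
}

def is_call_allowed(caller: str, callee: str) -> bool:
    """Default-deny check of whether one service may invoke another.
    Anything not explicitly allowed is rejected."""
    return (caller, callee) in ALLOWED_CALLS

print(is_call_allowed("microservice-a", "microservice-b"))  # True
print(is_call_allowed("microservice-b", "microservice-a"))  # False
```

Getting "default deny" right everywhere is exactly the part that's easy to fumble when you roll this yourself.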

Finally, we're seeing the rise of gRPC as an alternative to HTTP-based APIs for microservices, due to its higher performance and more formalized contracts. Migrating from HTTP to gRPC in a microservices environment could be tricky, as you'd need to upgrade clients and servers at the same time, or provide a period where both protocols were exposed. Dapr again can help us here, allowing gRPC or HTTP to be used for service to service invocation, and even allowing an HTTP caller to consume a gRPC service.

So as you can see, there's quite a lot to the "simple" task of service invocation, and Dapr gives you a very comprehensive solution out of the box. It's not perfect - I've run into some issues where VPN settings on a corporate network interfere with Dapr's service to service invocation in self-hosted mode. But it has the potential to greatly simplify this aspect of microservice development.

Dive deeper into Dapr

Of course, we've only scratched the surface of what Dapr offers by focusing on a single "building block". We could do the same for the other building blocks. I'm very interested to keep following Dapr and seeing how it evolves. It is already a very rich and capable platform, and it's very easy to adopt incrementally if you don't want to embrace everything at once. I recommend checking out the free Dapr for .NET Developers book which is a great introduction if you're a .NET developer.


"Serverless" architecture is one of the most exciting innovations to emerge in the cloud computing space in recent years. It offers several significant benefits including rapid development, automatic scaling and a cost-effective pricing model. Regular readers of my blog will know that I have been (and still am) an enthusiastic proponent of Azure Functions.

But "serverless" does entail some trade-offs. For every benefit of "serverless" there are corresponding limitations, which may be enough to put some people off adopting it altogether. And it can also seem to be at odds with the "containerized" approach to architecture, with Kubernetes having very much established itself as the premier way to host cloud-native applications.

I think the next stage of maturity for "serverless" is for the up-front decision of whether or not to use a serverless architecture to go away, replaced by a kind of "sliding scale", where how serverless to run becomes a deploy-time decision rather than something baked in up front.

To explain what I mean, let's look at five key benefits of serverless, and how in some circumstances, they introduce limitations that we want to get around. And we'll see that we're already close to a situation where a "sliding scale" allows us to make our application more or less serverless depending on our needs.

Servers abstracted away

The first major selling point of "serverless" is that servers are abstracted away. I don't need to manage them, patch them, or even think about them. I just provide my application code and let the cloud provider worry about where it runs. This is great until I actually do care for some reason about the hardware my application is running on. Maybe I need to specify the amount of RAM, or require a GPU or an SSD. Maybe for security reasons I want to be certain that my code is not running on shared compute with other resources.

Azure Functions is already a great example of the flexibility we can have in this area. Its multiple "hosting plans" allow you to choose anything from a truly serverless "consumption" plan, where you have minimal control over the hardware your functions run on, all the way up to the "premium" plan with dedicated servers, or containerizing your Function App and running it on hardware of your choice.

Automatic scale in and scale out

A second major attraction of serverless is that I don't need to worry about scaling in and scaling out. The platform itself detects heavy load and automatically provisions additional compute resource. This is great until I need to eliminate "cold starts" caused by scaling to zero, or need to have more fine-grained control over the maximum number of instances I want to scale out to, or want to throttle the speed of scaling in and out.

Again, we're seeing with serverless platforms an increased level of flexibility over scaling. With Azure Functions, the Premium plan allows you to keep a minimum number of instances on standby, and you can even take complete control over scaling yourself by hosting your Functions on Kubernetes and using KEDA to manage scaling.

Consumption based billing

A third key benefit of serverless is only paying for what you use. This can be particularly attractive to startups, or when you have dev/test/demo deployments of your application that sit idle for much of the time. However, the consumption-based pricing model isn't necessarily the best fit for all scenarios. Some companies prefer a predictable monthly spend, and also want to ensure costs are capped (avoiding "denial of wallet" attacks). Also, many cloud providers such as Azure offer significantly reduced "reserved instance" pricing, which can make a lot of sense for a major application with very high compute requirements.
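To see why the best pricing model depends on utilization, here's a back-of-the-envelope comparison in Python. The rates are purely illustrative, not Azure's actual prices; the point is only that the crossover depends on load.

```python
def monthly_cost(executions, cost_per_million, fixed_monthly):
    """Return (consumption-based cost, fixed-plan cost) for one month,
    using made-up illustrative rates."""
    consumption = executions / 1_000_000 * cost_per_million
    return consumption, fixed_monthly

# Mostly idle dev/test environment: consumption pricing wins.
dev, fixed = monthly_cost(executions=50_000, cost_per_million=20.0, fixed_monthly=150.0)
print(dev < fixed)   # True

# Heavily loaded production system: a fixed/reserved plan wins.
prod, fixed = monthly_cost(executions=500_000_000, cost_per_million=20.0, fixed_monthly=150.0)
print(prod > fixed)  # True
```

This is why a deploy-time choice of pricing model, rather than an up-front architectural commitment, is so attractive.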

Once again, Azure Functions sets a good example for how we can have a sliding scale. The "consumption" hosting plan is a fully serverless pricing model, whilst you can also host on a regular ("dedicated") App Service plan to get fixed and predictable monthly costs, with the "premium" plan offering a "best of both worlds" compromise between the two. And of course the fact that you can host on Kubernetes gives you even more options for controlling costs, and benefitting from reserved instance pricing.

Binding-based programming model

Another advantage associated with serverless programming models is the way that they offer very simple integrations to a variety of external systems. In Azure Functions, "bindings and triggers" greatly reduce the boilerplate code required to interact with messaging systems like Azure Service Bus, or reading and writing to Blob Storage or Cosmos DB.

But this raises some questions. Can I benefit from this programming model even if I don't want to use a serverless hosting model? And can I benefit from serverless hosting without needing to adopt a specific programming model like Azure Functions?

The answer to both questions is yes. I can run Azure Functions in a container, allowing me to benefit from its bindings without needing to host it on a serverless platform. And we are increasingly seeing "serverless" ways to host containerized workloads (for example Azure Container Instances or using Virtual Nodes on an AKS cluster). This means that if I prefer to use ASP.NET Core which isn't inherently a serverless coding model, or even if I have a legacy application that I can containerize, I can still host it on a serverless platform.

As a side note, one of the benefits of the relatively new "Dapr" distributed application runtime is the way that it makes Azure Functions-like bindings easily accessible to applications written in any language. This allows you to start buying into some "serverless" benefits from an existing application written in any framework.

Serverless databases

In serverless architectures, you typically prefer a PaaS database, rather than hosting it yourself. Azure comes with a rich choice of hosted databases including Azure SQL Database and Azure Cosmos DB. What we've also seen in recent years is a "serverless" pricing model coming to these databases, so that rather than a more traditional pricing model of paying a fixed amount for a pre-provisioned amount of database compute resource, you pay for the amount of compute you actually need, with the database capacity automatically scaling up or down as needed.

Of course, this comes with many of the same trade-offs we discussed for scaling our compute resources. If your database scales to zero, you have a potential cold start problem. And costs could be wildly unpredictable, especially if a bug in your software results in a huge query load. Again, the nice thing is that you don't have to choose up front. You could deploy dev/test instances of your application with serverless databases to minimise costs, given that they may be idle much of the time, while for your production deployment you pre-provision sufficient capacity for expected loads, perhaps allowing some scaling but within carefully constrained minimum and maximum levels.


"Serverless" does not have to be an "all-in" decision. It doesn't even need to be an "up front" decision anymore. Increasingly you can simply write code using the programming models of your choice, and decide at deployment time to what extent you want to take advantage of serverless pricing and scaling capabilities.

Want to learn more about how easy it is to get up and running with Azure Functions? Be sure to check out my Pluralsight courses Azure Functions Fundamentals and Microsoft Azure Developer: Create Serverless Functions.