Scaling features

Now we'll turn our attention to scaling the features we implement in our software. Users are the ultimate influence, and now that we have a rough idea of what's required to scale them, we can put this knowledge to work in feature development. When we think about scaling users, we're thinking about the why. Why do we choose this business model over that business model? Why do we need to enable things for one user role, and disable them for others? Once we get into actually designing and implementing a feature in JavaScript, we start thinking about the how. We're concerned not only with correctness, but also with scalability. As with users, scaling influencers are the determining factor when it comes to scalable features.

Application value

We'd like to think that we're doing a good job with the features we implement, and that each new feature we introduce provides value to the user. It's worthwhile for us to think about this because, in essence, that's what we're trying to do: scale the value of our software to a broader audience. A failure to scale, in this regard, is when existing users who rely on existing features are neglected, and grow disappointed with our software because we've focused on scaling new areas.

This happens when we forget about the problems we had originally set out to solve with our software. It might sound like a ridiculous notion, but it's easy to drift in a completely different direction based on any number of factors. In some rare cases, this change in direction has led to some of the most successful software the world has seen. In the more common case, it leads to failed software, and it is indeed a scaling problem. There's a core set of value propositions our software should always deliver; this is the essence of our software, and it should never waver. We're often faced with other scaling influencers, such as new customers who want things that differ from the core values our software offers. The inability to handle this means we're unable to scale the main value proposition of our application.

An indicator that we're headed down the wrong path when it comes to scaling value is confusion between current value and ideal value: that is, what our software currently does versus what we might like it to do someday. We have to be forward-thinking, there's no doubt about that. But future plans need to be continuously sanity-checked against what's possible, and this often means backtracking to why we set out to create the software in the first place.

If our application is really compelling, and we hope that it is, then we have to fight against other scaling influencers to keep it that way. Maybe this means that part of our process for evaluating new features involves ensuring that each feature contributes, in some way, to the core value proposition of our software. Not all features under consideration will be able to, and these deserve the most scrutiny. Is it really worth the change in direction, and the jeopardy to our ability to scale?

Killer features versus features that kill

We want our application to stand out from the crowd. It'd be nice if there were a niche enough market where we had little to no competition; then it would be easy to implement stable software that just works, without anything fancy, and everyone would be happy. Given that this isn't reality, we have to differentiate. One way to do this is by implementing a killer feature: an aspect of our software that nobody else has, and that users care deeply about.

The challenge is that killer features are rarely planned. Instead, they're a side-effect of something else going well in the delivery of our application. As we continuously mature our application, refining and tweaking features, we'll stumble upon that one "minor" change that evolves into a killer feature. It's no surprise that this is often the way killer features come into being. By listening to our customers and meeting scaling requirements, we're able to evolve our features. We add new features, take some away, and modify existing features. If we do that successfully for long enough, the killer features will reveal themselves.

Sometimes it's clear during the planning of a given feature that it's trying to be a killer feature for the sake of being a killer feature. That's not optimal, nor is it valuable to the user. Users didn't choose our software because we had "lots of killer features" on our product roadmap; they chose us because we do something they need done, possibly more efficiently than the alternatives. As we start thinking about killer features for their own sake, we start gravitating away from the core values of our application.

The best solution to this problem is an open environment, one that welcomes input from all team members at feature inception time. The earlier we're able to kill a bad idea, the more time we save by not working on it. Unfortunately, it's not always that clear-cut, and sometimes we have to do some development on a feature in order to discover that one or more aspects of it don't scale well. This could happen for any number of reasons, but it's not a total loss: if we're still willing to pull the plug on a feature after development has commenced, we can learn a valuable lesson.

When things don't scale and we decide to terminate the feature, we'll be doing our software a favor: we're not compromising our architecture by forcing something on it that doesn't work. We'll reach a point during the development of any feature where we need to ask ourselves: "Do we value this feature more than the architecture we have in place, and if so, are we willing to change the architecture to accommodate it?" Most of the time, our architecture is more valuable than the feature, so putting a stop to developing something that doesn't fit can serve as a valuable lesson. In the future, we'll have a better idea of what will scale and what won't, based on this cancelled feature.

Data-driven features

It's one thing to have an application with a large and varied user base. It's another to be able to make use of the ways they interact with our software by collecting data. User metrics are a powerful tool for collecting information pertinent to making decisions about our software, and the future direction it takes. We'll call these data-driven features.

In the beginning, when we have few or no users, we obviously can't collect user metrics. We'll have to rely on other information, such as the collective wisdom of our team. We've all likely worked on JavaScript projects in the past, so we have enough of a rough idea to get the product off the ground. Once there, we need tools in place to better support our decisions about features; in particular, which features do we need, and which do we not? As our software matures and we collect more user metrics, we can further refine our features to match the reality of what our users need.

Having the necessary data to make a feature data-driven is a challenging thing to scale, because we need a mechanism to collect and refine the data in the first place. This requires development effort that we may simply not have. Additionally, we have to actually make decisions about features based on this data; the data alone isn't going to turn itself into requirements for us.
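To make this concrete, here's a minimal sketch of what front-end metrics collection might look like. The /metrics endpoint and the event names are assumptions for illustration; a real mechanism would be shaped by whatever analytics infrastructure we have in place.

```javascript
// A minimal feature-usage tracker. Events are buffered in memory and
// flushed in batches, so tracking doesn't cost a request per interaction.
const buffer = [];

function trackFeature(feature, action, meta = {}) {
  buffer.push({ feature, action, meta, time: Date.now() });
}

// Send any buffered events to a hypothetical collection endpoint.
function flushMetrics() {
  if (!buffer.length) {
    return;
  }
  const payload = JSON.stringify(buffer.splice(0));
  if (navigator.sendBeacon) {
    // sendBeacon survives page unloads, making it a good fit for metrics.
    navigator.sendBeacon('/metrics', payload);
  } else {
    fetch('/metrics', { method: 'POST', body: payload });
  }
}

setInterval(flushMetrics, 30000);
window.addEventListener('beforeunload', flushMetrics);

// Usage: call trackFeature() wherever a feature is exercised.
trackFeature('search', 'opened');
```

The refinement step, turning these raw events into decisions about features, is the part no code snippet can do for us.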

We'll also want to predict the viability of features we've been asked to implement. This task is difficult without data to support our hypotheses. For example, do we have any data on the environments in which our application will run? Simple data points can be enough to determine that a feature isn't worth implementing.
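For instance, a handful of environment data points, collected with something like the tracker sketched above, can tell us whether a proposed feature is viable in the browsers and devices our users actually have. The specific checks below are illustrative assumptions, not a definitive list.

```javascript
// Capture simple data points about the environment the application runs in.
function environmentSnapshot() {
  return {
    webSockets: 'WebSocket' in window,      // can we push real-time data?
    localStorage: 'localStorage' in window, // can we persist state locally?
    touch: 'ontouchstart' in window,        // is this a touch device?
    screenWidth: window.screen.width,
    cores: navigator.hardwareConcurrency || 1
  };
}

// Report the snapshot once per session. Aggregated across users, this tells
// us, for example, what fraction of sessions could support a feature that
// depends on web sockets.
trackFeature('environment', 'snapshot', environmentSnapshot());
```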

Data-driven features work from two angles: the data we collect automatically, and the data we supply. Both are difficult to scale, and yet both are necessary to scale. The only real solution is to keep the number of features we implement small enough that we can handle the amount of data generated by each one.

Competing with other products

Unless we're operating in a completely niche market, there's a good probability of competing products. Even if we are in a somewhat niche market, there's still going to be some overlap with other applications. There are a lot of software development firms out there, so we're likely to face direct competition. We compete with similar offerings by creating superior features. This means that not only do we have to keep delivering top-notch software, but we also need to be aware of what the competition is up to, and what users of their software think. This is a limiting factor in our ability to scale, because we have to spend time understanding how these competing technologies work.

If we have a sales force out selling our product, they're often a good source of information on what the other guys are doing. They'll often be asked by prospective customers whether our software does such and such, because this other application does it. Perhaps the most compelling selling point is that we can deliver that feature, and that we can do it better.

This is where we must be careful, as this is yet another scaling factor that limits our ability to win customers: we have to scale to the promises we make to existing and prospective customers. Promise too much, and we won't be able to implement the features, leading to disappointed users. Promise too little, or nothing at all, and we won't win customers in the first place. The best way to combat this scaling limitation is to ensure that those selling our product are kept well in touch with the reality of our software: what it can and cannot do, and what's a realistic future possibility versus an impractical option.

To sell our product, there has to be some wiggle room for promising things without fully understanding the implications of implementing those promises. Otherwise, we won't get the customers we're after, because we're not generating any excitement around our product. If we're going to scale this approach to selling to new customers, we need a proven means of distilling the promises into something achievable. On the one hand, we can't compromise the architecture. On the other hand, we have to meet somewhere in the middle to give the user what they need.

Modifying existing features

Long after we've successfully deployed our JavaScript application, we're still constantly refining the design of our code and the overall architecture. The only constant is change, or something to that effect. It takes a sizeable amount of discipline to go back and modify existing features in an effort to improve the experience for users, because we feel more pressure from stakeholders to add new features. This presents a long-term scaling problem for our application: we can't add new features forever without ever improving what's already in place.

The unlikely scenario is that there's no need to change anything: all our existing users are happy, and they don't want us to touch a thing. Some users are simply afraid of change, but others like aspects of our software as they are because we did a good job implementing them. We obviously want more features that are this good, where users are generally happy and don't see a need for improvement.

So how do we reach this point? We have to listen to user feedback, and base our roadmap for modifying features on that feedback. To keep scaling along with our users and their demands, we have to strike a balance between implementing new features and modifying existing ones. One way to check whether we're moving in the right direction with feature enhancements is to broadcast the proposed changes to our user base and gauge the feedback we get, if any. In fact, this might entice our otherwise quiet users to give us some specific suggestions. It's a way of putting the ball in the user's court: "here's what we're thinking, what do you think?"

Beyond figuring out which features to improve, and when to improve them relative to implementing new features, there's the architectural risk. How tightly coupled is our code? Can we isolate a feature to the extent that there's no chance of breaking other features? We're never going to completely eliminate this risk; we can only reduce coupling. The scaling issue at play here is how much time we spend modifying a given feature due to refactoring, fixing regressions, and so on. We spend less time on these activities when our components are loosely coupled, and consequently we can scale our feature enhancements. From a management point of view, we always run the risk of blocking other people in the organization through conflicts brought about by our changes.
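A concrete way to keep that coupling low is to have features communicate through events rather than direct references. Here's a minimal publish/subscribe sketch; the event names are hypothetical.

```javascript
// A tiny publish/subscribe mechanism. Features talk through named events
// instead of calling into one another, so modifying one feature is far
// less likely to break another.
const handlers = new Map();

function subscribe(name, handler) {
  if (!handlers.has(name)) {
    handlers.set(name, []);
  }
  handlers.get(name).push(handler);
  // Return an unsubscribe function so features can clean up after themselves.
  return () => {
    const list = handlers.get(name);
    list.splice(list.indexOf(handler), 1);
  };
}

function publish(name, data) {
  (handlers.get(name) || []).forEach(handler => handler(data));
}

// The profile feature announces a change; the notifications feature reacts,
// and neither holds a reference to the other.
subscribe('profile.updated', profile => {
  console.log('refresh notifications for', profile.name);
});
publish('profile.updated', { name: 'adam' });
```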

Supporting user groups and roles

Depending on the type of business model we're following and the size of our user base, user management becomes a scaling issue for us because it touches every feature we implement. This is further complicated by the fact that user management is likely to change just as frequently as the feature requirements do. As our application grows, we'll likely be dealing with roles, groups, and access control.

There are a lot of side-effects with complicated user management. The new feature we've just implemented may work perfectly fine initially, but fail in a number of other scenarios our production customers are likely to face. This means that we need more time dedicated to testing features, and the quality assurance team is probably already overwhelmed. Not to mention the additional security and privacy implications that arise from complicated user management in each of our features.

We can't really do much about complex user management schemas, as they're often symptomatic of the organization using the application, and its structure. We're more likely to face these types of complexities with on-premise deployments.
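Even so, every feature ends up performing checks against whatever schema the organization imposes. As a rough sketch, assuming a simple role list on the session object (real schemas are usually far messier, with groups and per-object access control):

```javascript
// A session shape assumed for illustration.
const session = {
  user: 'adam',
  roles: ['member', 'moderator']
};

function hasRole(session, role) {
  return session.roles.indexOf(role) !== -1;
}

// Feature entry points gate themselves on roles; this is why user
// management touches every feature we implement.
function openAdminPanel(session) {
  if (!hasRole(session, 'admin')) {
    throw new Error('access denied');
  }
  // ...render the admin panel
}
```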

Introducing new services

There comes a point where the current back-end services no longer suffice for new features. We can scale our front-end development efforts better when there's very little dependency on the back-end. If that sounds counter-intuitive, don't worry. It's true that we need back-end services to carry out the requests of our users. So the dependency will always be there. What we want to avoid is changing the API unnecessarily.

If there's a way to implement the feature using existing APIs, we do it. This lets the back-end team focus on stability and performance by fixing bugs. They can't do that if the API constantly has to change in order to support our features.

Sometimes there's no getting around adding new back-end services. In order to scale our development process, we need to know when new services are necessary, and how to go about implementing them.

The first question is the necessity of the new service. Sometimes this is easy to answer: it's simply not possible to implement the feature with the existing API. Other times, we can make do with what's there. The second question is the feasibility of the new service. We'll likely propose the shape of the new API, since we're the ones who need it, and then we'll have to hear what the back-end team thinks. If we're a team with full-stack developers, there's less overhead, because we're likely all on the same team and in closer communication with one another.

Now that we've decided to go ahead with the new API, we have to synchronize the implementation of our feature in the front-end with the implementation of the supporting services in the back-end. There's no cut-and-dried solution here for us to follow, because the service could be easy or difficult to implement, and our feature could require several new services. The trick is reaching an agreement on the API and having a mocking mechanism in place. Once the real service is available, it's a simple matter of disabling the mock.
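As a sketch of that mocking mechanism, suppose the two teams have agreed on a conversations endpoint and its response shape (both hypothetical here). The feature code calls the same function either way; only the data source changes when the real service lands.

```javascript
// Flip to false once the real back-end service is deployed.
const USE_MOCK = true;

function fetchConversations(userId) {
  if (USE_MOCK) {
    // Canned data matching the agreed-upon response shape.
    return Promise.resolve([
      { id: 1, participants: ['adam', 'jill'], unread: 2 }
    ]);
  }
  return fetch(`/api/users/${userId}/conversations`)
    .then(response => response.json());
}

// Feature code is written against the agreed API, not against the mock.
fetchConversations(42).then(conversations => {
  console.log(conversations.length, 'conversations');
});
```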

However, in terms of scaling our application as a whole, this is just one integration point between the front-end features and back-end services. The implications of introducing the new feature for the system as a whole aren't known; we can only guess so much through testing and prior knowledge. It's not until production that we'll see the full implications of how well our new feature scales. Different features that use the exact same service have different implications for request load, error rate, and so on.

Consuming real-time data

It's commonplace in JavaScript applications to have socketed connections to back-end data, in order to keep user sessions synchronized with reality. This simplifies some areas of our code while complicating others, and the implications for scaling are substantial. Sending real-time data over web socket connections is called "pushing data". The prevailing technique prior to web sockets was long-polling HTTP requests, which basically meant that instead of data being delivered to clients when it changed, each client was responsible for checking whether the data had changed.
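For reference, a long-polling client looks something like the following sketch; the endpoint is an assumption. The server holds each request open until something changes, and the client immediately asks again.

```javascript
// Long-polling: repeatedly request updates; the server responds only when
// there's new data (or the request times out), and we immediately re-poll.
function longPoll(url, onData) {
  fetch(url)
    .then(response => response.json())
    .then(data => {
      onData(data);
      longPoll(url, onData); // re-issue the request right away
    })
    .catch(() => {
      // Back off briefly on network errors before trying again.
      setTimeout(() => longPoll(url, onData), 5000);
    });
}

longPoll('/api/updates', data => console.log('changed:', data));
```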

The same scaling issues surrounding real-time data still exist today. With web socket technology, some of the burden has shifted from our front-end code to the back-end: it's up to the application services to push web socket messages when relevant events take place. There are a number of angles we need to look at here, though. For example, does our architecture as a whole rely on the delivery of real-time data, or are we only considering real-time data for a single feature?

If we're considering introducing web socket connectivity for the first time, to better support a new feature, we have to ask ourselves whether it's something we want in our architecture moving forward. The challenge with real-time data affecting only one or two features is a lack of clarity. Developers looking at one feature that has real-time data fed into it, versus another that does not, will have a hard time addressing things like the consistency issues that arise over the course of developing our software.

It often makes more sense, and scales better from a number of perspectives, to properly integrate real-time data into our front-end architecture, which essentially means that any given component should have access to real-time data in the same way as any other component. As always, though, the scaling influencers that flow top-down, from the user and their organization, ultimately determine the type of features we implement. This in turn influences the rate at which real-time data is published. Depending on the structure of our application, and how user data is connected, the frequency with which real-time data is delivered to each browser session can fluctuate dramatically. These types of considerations have to be made for every feature we implement.
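One way to realize that uniform access, sketched here under the assumption that messages arrive as { topic, data } envelopes over a single connection, is a small module that components subscribe to by topic:

```javascript
// A single web socket connection shared by the whole front-end. The URL
// and message envelope are assumptions for illustration.
const socket = new WebSocket('wss://example.com/updates');
const topics = new Map();

socket.onmessage = event => {
  const { topic, data } = JSON.parse(event.data);
  (topics.get(topic) || []).forEach(handler => handler(data));
};

// Any component gets real-time data the same way as any other component.
function onRealtime(topic, handler) {
  if (!topics.has(topic)) {
    topics.set(topic, []);
  }
  topics.get(topic).push(handler);
}

// A chat view and a presence indicator use the identical mechanism.
onRealtime('chat.message', message => console.log('new message', message));
onRealtime('user.presence', status => console.log('presence', status));
```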

Scaling features example

Our video conferencing software is popular with large organizations, mainly due to its stability, its performance, and the fact that it's browser-based, without the need for plugins. One of our customers has requested that we implement chat utilities as well. They like our software so much that they'd rather use it for all real-time communication, not just video conferencing.

The actual implementation of chat utilities at the JavaScript level wouldn't be too difficult. We would end up reusing several of the components that enable our video conferencing functionality; a little refactoring, and we'd have the new chat components we need. But there are some subtle differences between text chat and video chat with regard to scale.

The key difference is the longevity of text chats versus video chats, where the latter are generally a transient occurrence. This means we need to figure out policies for persisting chats. Our video chats don't require user accounts to join, so that people can invite participants from outside the organization. This is different with text chats, because we can't exactly invite anonymous actors and then blow the chat away after they leave. We'll likely have other changes to make in our user management components as well; for example, do chat groups now correspond to video groups?

Since this is just one customer asking for the capability, we'll probably want a way to turn it off. Not only does this new feature have the potential to detract from our core value, video conferencing, but it can also cause problems in deployments for other customers. With the new back-end services, the added interface complexity, and the additional training and support that's required, it's understandable that not all organizations would want this feature enabled. So if the ability to turn components on and off isn't something we already have in our architecture, then that's something else that influences our ability to scale.
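A feature toggle along these lines is one way to get that ability; the configuration shape here is an assumption, and in practice it would likely be delivered per-deployment by the back-end:

```javascript
// Per-deployment feature configuration, fetched or baked in at start-up.
const config = {
  features: {
    videoConferencing: true, // the core value proposition, always on
    chat: false              // enabled only for customers who want it
  }
};

function featureEnabled(name) {
  return Boolean(config.features[name]);
}

// Components consult the toggle before wiring themselves up, so disabled
// features cost nothing in deployments that don't use them.
if (featureEnabled('chat')) {
  console.log('initialize chat components');
}
```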