Technical

React.JS: The Marketing Implications of Modern JavaScript Development

13th July 2015

Recently I’ve spent a lot of time developing software and working with marketers. More recently, that’s included building a front end using React.JS.

The why behind React, rather than Riot.JS or other similar libraries, could itself fill an entire post, but for today we’re going to look at some thoughts on modern JS front ends, and some ideas I hope will be of use to anyone planning on building something in React or something similar.

Note: this post is mostly aimed at developers, but with a marketing bent. That said, if you’re a marketer or designer, you really need to understand this as well. Shout in the comments if you need anything explained!

A Thoroughly Old-School Application

As the front end technologies we’re used to grow up and mature, we’re seeing a lot of old concepts being rediscovered. React itself is best known for the virtual DOM and event system it employs, a concept that came from graphics rendering pipeline theory. The way it thinks about data, declared once up front and flowing one way through the application from start to finish, is how a lot of old-school monolithic applications work. And beyond React, a lot of the work being done on JSON-based storage engines is around introducing schemas, isomorphic JavaScript is taking JavaScript to the server to allow more desktop-style live interaction over the web, and so on.

Web applications are becoming more and more like traditional desktop applications, in that they’re a client talking in real time to a data store. The only difference is that rather than the data sitting on a hard drive in the local machine, it’s in some massive database on a server somewhere. Spotify is fundamentally an API delivering data to a client. Whether that client is an app on a phone, a JS-powered web front end or a desktop client is largely irrelevant, though it’s interesting that two of those are programs running locally on an OS. Whether it’s an app or a pure web experience, applications are increasingly abandoning the local store as a way of doing things. Facebook works the same way, as do Twitter, Skype, Gmail, Instagram, Netflix, PayPal and virtually every other database-backed app you’d care to think of. Even Windows and OS X are starting to get in on the game, with the ability to transition seamlessly between mobile and desktop clients.

The idea of a cloud-based repository for data, with fungible client front-ends is nothing particularly new; that’s a trend that has been fairly evident for a while now. But it does bring me to my first theory…

One-Way Data Flow > Two-Way Data Binding

From a development standpoint, having now used React for something big, I can only sympathise with anyone building anything of a similar scope in Angular and the like. I may be wrong, but I suspect that people will increasingly use approaches where there’s a single “true” state of the data at any given time: rather than trying to keep the front and back ends in sync, applications will simply flow changes back to the data store and regenerate the client view from that.

The other side of this (and this is something slightly specific to React) is that state becomes mostly a non-issue. You start out with nothing in particular, load your data up over XHR (see the next point), and then pass data down as required using props. Managing data state in a single place at the top of your application means you only have to debug data in a single place. It also makes your UI components far simpler, as they either have data or they don’t, which makes rendering their output incredibly simple. If they have data, render it; otherwise, flow down an error code and show the appropriate error message. Your UI pieces become dumb code, treating sources of data as a black box, and all the complexity stays contained.
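To make that concrete, here’s a rough sketch of the pattern (the component names and the /api/items endpoint are made up for illustration, and it assumes React with a JSX build step): one top-level component owns the state and loads the data, and the child just renders whatever props it’s handed, or the error.

```javascript
// Top-level component: the only place that owns and changes state.
var App = React.createClass({
  getInitialState: function () {
    return { items: null, error: null };
  },
  componentDidMount: function () {
    // Load the data over XHR once the empty shell has rendered.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/items'); // hypothetical endpoint
    xhr.onload = function () {
      if (xhr.status === 200) {
        this.setState({ items: JSON.parse(xhr.responseText) });
      } else {
        this.setState({ error: xhr.status });
      }
    }.bind(this);
    xhr.send();
  },
  render: function () {
    // Data (or the error) simply flows down as props.
    return <ItemList items={this.state.items} error={this.state.error} />;
  }
});

// Dumb view: no state, no idea where the data came from.
var ItemList = React.createClass({
  render: function () {
    if (this.props.error) {
      return <p>Something went wrong ({this.props.error})</p>;
    }
    if (!this.props.items) {
      return <p>Loading…</p>;
    }
    return (
      <ul>
        {this.props.items.map(function (item) {
          return <li key={item.id}>{item.name}</li>;
        })}
      </ul>
    );
  }
});

ReactDOM.render(<App />, document.getElementById('root'));
```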

Oh, and as a result of all this, you’ll find you’re using a Flux-type architecture: data > logic > views. Seem similar to anything you’ve come across before?
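If it helps, here’s a toy version of that flow in plain JavaScript, not the actual Flux library: a single store holds the data, a logic layer is the only thing allowed to change it, and the views regenerate themselves whenever it changes (the element ID is invented).

```javascript
var listeners = [];
var state = { items: [] };

// Store: the single "true" copy of the data.
var store = {
  getState: function () { return state; },
  subscribe: function (fn) { listeners.push(fn); }
};

// Logic layer: the only thing that changes the store.
function addItem(name) {
  state = { items: state.items.concat([{ name: name }]) };
  listeners.forEach(function (fn) { fn(); });
}

// View layer: regenerated from the store whenever it changes.
store.subscribe(function () {
  document.getElementById('item-count').textContent =
    store.getState().items.length + ' items';
});

addItem('First item'); // data > logic > views
```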

The End of Initial Data

The idea that you load HTML which contains the initial state of the application is on its last legs. Take a look at Facebook when you first load the news feed: you get lots of placeholders, into which data is poured. The reason is that it’s much easier to deliver a small HTML payload initially, and then to flesh it out with data as it arrives, in the form of JSON from a database, over an API. That way you can show the user something very quickly (the HTML and CSS needed to render some initial state are probably pretty small, and if you’re not touching the database at that point you’re just grabbing templates, which are lightweight and fast to assemble).
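A bare-bones version of that pattern, outside of any framework, looks something like this (the /api/feed endpoint and the feed element are invented for the example): the HTML ships with a cheap placeholder, and the JSON fills it in once it arrives.

```javascript
// The page ships with a lightweight placeholder, something like:
//   <div id="feed"><p class="skeleton">Loading your feed…</p></div>
// The real content arrives afterwards as JSON over an API.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/feed'); // hypothetical endpoint
xhr.onload = function () {
  var stories = JSON.parse(xhr.responseText);
  document.getElementById('feed').innerHTML = stories
    .map(function (story) { return '<article>' + story.title + '</article>'; })
    .join('');
};
xhr.send();
```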

With that in mind, I suspect JSON is only going to become more prevalent as the format of choice for data transmission going forward. It’s just a much more pleasant way to structure data than XML, and it makes more sense when everything is being sent to a client that can unpack a JSON string straight into an array or object. That said, I can’t help noticing that in the applications I’ve been building lately, and the ones I’ve seen appearing, the data is still mostly relational. So whilst I suspect the output will be JSON, I can’t see JSON-based storage formats becoming the majority any time soon.
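For what it’s worth, this is most of the reason JSON wins on the client: one built-in call turns the payload straight into objects and arrays (the record below is made up).

```javascript
// The same record an XML API would wrap in verbose tags arrives as JSON...
var payload = '{"user": {"id": 42, "name": "Ada", "orders": [101, 102, 103]}}';

// ...and one call unpacks it into plain objects and arrays for the client.
var data = JSON.parse(payload);
console.log(data.user.name);          // "Ada"
console.log(data.user.orders.length); // 3
```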

Mongo, Elasticsearch etc are all great. But they only work well for OLAP-friendly storage needs. As soon as you get into OLTP systems, you’re going to end up in a mess if you’re using those types of data stores.

Once the front end has rendered, data gets imported, and that gets applied to the page. That saves a lot of initial wait time, as the user can see progress as it happens. It also means you can make lots and lots of very small requests, rather than getting all the data for an entire page in one lump. Take a look at this page from searchmetrics. Notice how it seems to load quickly, even though it’s not initially useful. If you had to wait for everything to arrive from the server before seeing a page start to render, you’d be waiting around ten seconds. That’d get annoying pretty quickly.
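To illustrate the “lots of small requests” point, a page like that can fire each widget’s request independently and paint each one as its data lands, rather than blocking on one big response (the endpoints and the response shape here are invented):

```javascript
// Each widget fills itself in as soon as its own small request returns.
['/api/visibility', '/api/rankings', '/api/backlinks'].forEach(function (url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url); // hypothetical endpoints
  xhr.onload = function () {
    var widget = JSON.parse(xhr.responseText);
    // Assumes each response says which element it belongs to.
    document.getElementById(widget.id).textContent = widget.summary;
  };
  xhr.send();
});
```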

However, this is going to get even better in the near future. With the advent of HTTP/2, and specifically its use of multiplexing, we can avoid making the user wait while the browser chugs through loading all the required assets in batches. That means systems that load data after rendering the templates will get even faster, because many small requests can be served simultaneously over a single connection rather than queueing up.
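If you want to experiment with this, here’s a minimal sketch of an HTTP/2 server using Node’s built-in http2 module (the certificate paths are placeholders; you’ll need your own key and cert, since browsers only speak HTTP/2 over TLS):

```javascript
var http2 = require('http2');
var fs = require('fs');

// One TLS connection carries many multiplexed streams, so lots of small
// JSON requests no longer have to queue up in batches.
var server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),   // placeholder path
  cert: fs.readFileSync('server-cert.pem')  // placeholder path
});

server.on('stream', function (stream, headers) {
  // Every request arrives as a stream on the same connection.
  stream.respond({ ':status': 200, 'content-type': 'application/json' });
  stream.end(JSON.stringify({ path: headers[':path'] }));
});

server.listen(8443);
```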

SEO / Search Engine Indexing Implications

Well, I was a marketer for a long time, so this bit had to be coming really. With both of these approaches, you’re making it really hard for Google to understand what you’ve got going on. It’s entirely possible you’re going to end up with more than one screen per URL (although that’s unlikely, if you’re not mad). The larger problem, though, is the old issue of not having anything to show until the page has rendered. I suspect that longer term this will be solved through JSON-LD and similar structured data tooling, combined with better crawling abilities from Google et al. For now, though, I’d strongly suggest looking at either rolling your own headless-browser caching solution for serving rendered snapshots to search engines, or using something like Prerender.
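On the JSON-LD side, the idea is simply that the structured data sits in a script tag crawlers can read without executing any of your application code; something along these lines (the details are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "React.JS: The Marketing Implications of Modern JavaScript Development",
  "datePublished": "2015-07-13",
  "author": { "@type": "Person", "name": "Pete" }
}
</script>
```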

Analyse the Potential SEO Consequences Before You Migrate

Whatever framework you implement on the front end, I would offer you this one piece of advice: try setting up a “vanilla” site on a domain and getting it indexed by Google. Then grab the server logs and see what’s happening. Learning from this experience will arm you with all the information you need to cope with the realities of this largely unexplored area of technical SEO, before you’ve migrated.
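As a starting point for the log digging, something as small as this will show you which URLs Googlebot actually fetched (the log path assumes a standard nginx/Apache-style access log; adjust it to whatever your server writes):

```javascript
var fs = require('fs');

// Pull the Googlebot requests out of an access log to see which URLs
// the crawler actually fetched on the vanilla site.
var lines = fs.readFileSync('/var/log/nginx/access.log', 'utf8').split('\n');

lines
  .filter(function (line) { return line.indexOf('Googlebot') !== -1; })
  .forEach(function (line) {
    // In the common/combined log format the request line sits in quotes,
    // e.g. "GET /about HTTP/1.1".
    var request = line.split('"')[1];
    console.log(request);
  });
```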

Given what you can build with tooling made this way, though, I suspect the benefits far outweigh any challenges you might face on the search front. The possibilities are just too great to let that be a serious constraint.

So with that all said, come back for part three, where we’ll put all this into practice, and build something a bit interesting.


Responses

  1. Very interesting post Pete. I like this side of Marketing cause I’m involved in product design & dev :D

    The Facebook example is an interesting one, but it also has to do with the “skeleton pattern” (coined by Luke Wroblewski in his “Avoid the spinner” post), which is used not only to keep the initial code payload small but also to improve perceived performance.

    Obviously the main technical reason is the one you mentioned, but there is a huge benefit also from a user perspective.

    Good job!

  2. This article is interesting for the reason that it actually *omits* why React is good for SEO. One of the key benefits (if implemented) is that React (coupled with Node) is isomorphic JavaScript (it can be run client- and/or server-side), otherwise known as progressive JavaScript.

    If developers take the approach of sending the initial server-side rendered HTML on the initial request, along with the payload that enables the client-side rich interaction, then you can have your Single Page Application speed and responsiveness, as well as managing requests from search engines and enabling crawlability without having to do any dirty secondary clean-up like headless-browser snapshots and the like.

    To be fair, there is some routing and heavy use of the History API required, and I have significantly understated (and possibly mis-stated) some of the technical elements, but the examples of React that I have seen growing have almost all taken advantage of this key aspect – blending rich UI with a crawlable architecture that makes both users and search bots happy.

    Did I mis-read that part of your article?
