Recently I’ve spent a lot of time developing software and working with marketers. More recently, that’s included building a front end using React.
The why behind React rather than Riot.js or similar frameworks could consume an entire post of its own, but for today we’re going to look at some thoughts on modern JS front ends, along with some ideas I hope will be useful to anyone planning to build something in React or similar.
Note: this post is mostly aimed at developers, but with a marketing bent. That said, if you’re a marketer or designer, you really need to understand this as well. Shout in the comments if you need anything explained!
A Thoroughly Old-School Application
Web applications are becoming more and more like traditional desktop applications, in that they’re a client talking in real time to a data store. The only difference is that rather than the data being on a hard drive on the local machine, it’s some massive database on a server somewhere. Spotify is fundamentally an API delivering data to a client. Whether that client is an app on a phone, a JS-powered web front end or a desktop client is largely irrelevant, but it’s interesting that two of those are programs running locally on an OS. Whether it’s an app or a pure web experience, applications are increasingly abandoning the idea of a local store as a way of doing things. Facebook works the same way, as do Twitter, Skype, Gmail, Instagram, Netflix, PayPal and virtually every other database-backed app you’d care to think of. Even Windows and OS X are starting to get in on the game, with the ability to transition seamlessly between mobile and desktop clients.
The idea of a cloud-based repository for data, with fungible client front-ends is nothing particularly new; that’s a trend that has been fairly evident for a while now. But it does bring me to my first theory…
One-Way Data Flow > Two-Way Data Binding
From a development standpoint, having now used React for something big, I can only sympathise with anyone building anything of a similar scope in Angular and the like. I may be wrong, but I suspect people will increasingly use approaches where there’s a single “true” state of the data at any given time: rather than trying to keep the front end and back end in sync, applications will simply flow changes back to the data store and regenerate the client view from that.
The other side of this (and this is slightly specific to React) is that state becomes mostly a non-issue. You start out with nothing in particular, load your data over XHR (see the next point), and then pass data down as required using props. Managing data state in a single place at the top of your application means you only have to debug data in a single place. It also makes your UI components far simpler, as they only ever have data or nothing, which makes rendering their output incredibly simple. If they have data, render it; otherwise, flow down an error code and show the appropriate error message. Your UI pieces become dumb code, treating sources of data as a black box, and all the complexity stays contained.
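To make that concrete, here’s a minimal sketch of the “dumb component” idea in plain JS, without React itself. Each view is a pure function of its props: data in, markup out. All the names here (userCard, renderApp) are illustrative, not from any library.

```javascript
// A dumb view: no internal state. Either it has data, it has an error
// it was handed from above, or it shows a placeholder.
function userCard(props) {
  if (props.error) {
    return `<div class="error">${props.error}</div>`;
  }
  if (!props.user) {
    return `<div class="placeholder">Loading…</div>`;
  }
  return `<div class="user">${props.user.name}</div>`;
}

// The single source of truth lives at the top; views just receive
// slices of it as props and are regenerated whenever it changes.
function renderApp(state) {
  return userCard({ user: state.user, error: state.error });
}

console.log(renderApp({}));                        // placeholder
console.log(renderApp({ user: { name: "Ada" } })); // data
console.log(renderApp({ error: "API down" }));     // error
```

Debugging gets easier because the only question you ever ask of a view is “what props did it receive?” — the answer always traces back to the one state object at the top.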
Oh, and as a result of this, you’ll find you’re using a Flux-type architecture. Data > logic > views. Sound similar to anything you’ve come across before?
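That data > logic > views loop can be sketched in a few lines of plain JS. This is a toy stand-in for a real Flux store, and every name in it (store, dispatch, render) is illustrative rather than any particular library’s API:

```javascript
// A minimal Flux-style loop: actions flow into the store, the store is
// the only place state changes, and views are regenerated from state.
const store = {
  state: { todos: [] },
  listeners: [],
  subscribe(fn) { this.listeners.push(fn); },
  dispatch(action) {
    // Data > logic: state changes happen here and nowhere else.
    if (action.type === "ADD_TODO") {
      this.state = { todos: [...this.state.todos, action.text] };
    }
    // Logic > views: every change triggers a fresh render.
    this.listeners.forEach(fn => fn(this.state));
  },
};

// The view never mutates state; it's just a function of it.
function render(state) {
  return `<ul>${state.todos.map(t => `<li>${t}</li>`).join("")}</ul>`;
}

let latest = render(store.state);
store.subscribe(state => { latest = render(state); });
store.dispatch({ type: "ADD_TODO", text: "Write part three" });
console.log(latest); // <ul><li>Write part three</li></ul>
```

Note there’s no two-way binding anywhere: the view can’t reach back into the state, it can only dispatch an action and wait for the new state to flow down.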
The End of Initial Data
The idea that you load HTML containing the initial state of the application is on its last legs. Take a look at Facebook when you first load the news feed. You get lots of placeholders, into which data is poured. The reason is that it’s much easier to deliver a small HTML payload initially, and then flesh it out as data arrives in the form of JSON from an API. That way, you can show the user something very quickly: the HTML and CSS needed to render some initial state is probably pretty small, and if you’re not touching the database at that point, you’re just grabbing templates, which are lightweight and fast to assemble.
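The pattern looks roughly like this. In the sketch below, fetchJSON is a stand-in for a real XHR/fetch call to your API; here it just resolves a canned payload so the example is self-contained, and the endpoint path is made up.

```javascript
// Stand-in for a real network call: resolves canned JSON for a fake endpoint.
function fetchJSON(url) {
  const canned = { "/api/feed": [{ id: 1, title: "Hello" }] };
  return Promise.resolve(canned[url]);
}

// Step 1: ship a tiny template immediately — no database touched,
// so the user sees something render almost instantly.
function renderShell() {
  return `<div id="feed">Loading…</div>`;
}

// Step 2: when the JSON arrives, pour it into the placeholder.
function renderFeed(items) {
  return `<div id="feed">${items.map(i => `<p>${i.title}</p>`).join("")}</div>`;
}

console.log(renderShell()); // shown straight away
fetchJSON("/api/feed").then(items => {
  console.log(renderFeed(items)); // replaces the placeholder
});
```

Because each placeholder can be filled independently, you can split the page into many small requests rather than one big one — which matters for the next point.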
With that in mind, I suspect JSON is only going to become more prevalent as the format of choice for data transmission. It’s just a much more pleasant way to structure data than XML, and it makes more sense when the client can unpack a JSON string straight into objects and arrays. That said, I can’t help but notice that in the applications I’ve been building and seeing lately, the data is still mostly relational. So whilst I suspect the output will be JSON, I can’t see JSON-based storage formats becoming the majority any time soon.
Mongo, Elasticsearch and the like are all great, but they only work well for OLAP-friendly storage needs. As soon as you get into OLTP systems, you’re going to end up in a mess with those types of data stores.
Once the front end has rendered, data gets imported and applied to the page. That saves a lot of initial wait time, as the user can see progress as it happens. It also means you can make lots and lots of very small requests, rather than fetching all the data for an entire page in one lump. Take a look at this page from Searchmetrics. Notice how it seems to load quickly, even though it’s not initially useful. If you had to wait for everything to arrive from the server before the page started to render, you’d be waiting around ten seconds. That would get annoying pretty quickly.
However, this is going to get even better in the near future. With the advent of HTTP/2, and specifically its use of multiplexing, we can avoid making the user wait while the browser chugs through loading assets in batches. Many requests can run concurrently over a single connection, so systems that use this load-data-after-template-rendering approach will get even faster.
SEO / Search Engine Indexing Implications
Well, I was a marketer for a long time, so this bit had to come eventually. With both of these approaches, you’re making it really hard for Google to understand what you’ve got going on. It’s entirely possible you’ll end up with more than one screen per URL (although that’s unlikely if you’re sensible). The larger problem, though, is the old issue of having nothing to show until the page has rendered client-side. I suspect that longer term this will be solved through JSON-LD and similar structured-data tooling, combined with better JS crawling from Google et al. For now, though, I’d strongly suggest either rolling your own headless-browser rendering and caching solution for serving rendered pages to search engines, or using something like Prerender.
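For a flavour of what JSON-LD looks like, here’s a sketch of structured data for an article page, built as a plain object. The schema.org Article fields shown are common ones, and the values (headline, author, date) are placeholders — check the schema.org documentation for the type that actually matches your content.

```javascript
// JSON-LD is just JSON: describe the page as a schema.org object...
const jsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "Thoughts on modern JS front ends", // placeholder values
  author: { "@type": "Person", name: "Your Name" },
  datePublished: "2016-01-01",
};

// ...then serialise it into a script tag in the page <head>. Because it's
// a separate block of data, crawlers can read it without executing your app.
const tag = `<script type="application/ld+json">${JSON.stringify(jsonLd)}</script>`;
console.log(tag);
```

The appeal is that the structured data sits outside your rendering pipeline entirely, so search engines get a machine-readable summary even if they struggle with the JS-rendered page itself.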
Analyse the Potential SEO Consequences Before You Migrate
Whatever framework you choose for the front end, I’d offer this one piece of advice: set up a “vanilla” site on a test domain and get it indexed by Google. Then grab the server logs and study what’s happening. That experience will arm you with the information you need to cope with the realities of this largely unexplored area of technical SEO before you migrate.
Given what you can build with tooling made this way, though, I suspect the benefits far outweigh any challenges you might face on the search front. The possibilities are just too great to let that be a serious constraint.
So with that all said, come back for part three, where we’ll put all this into practice, and build something a bit interesting.