v3.0.0
As discussed in the last two talks, ECMAScript is being improved on a yearly cycle.
That means the ES Engines (thus browsers) are always trying to keep up**. While ES6/ECMAScript 2015 support is pretty great for Evergreen browsers, anything past that is dodgy.
And what if you have to support non-Evergreen Browsers?
** Browsers sometimes implement features well before they're approved, leading to incompatible or broken implementations.
You've got basically two options:
Of course the third option is to not use new features, and while that's reasonable, many of the new features offer genuinely new capabilities or significant code-readability improvements.
They're not perfect even when targeting ES6:
ES2016+ support isn't bad, considering:
Pretty poor, so don't rely on these for compatibility.
This is "source to source" compilation because it's from one high-level language to another.
A common example would be the Sass compiler to go from SCSS to CSS.
Most work as part of the tool chain, but some can also work in the browser (which is slow, but OK for learning).
Sometimes they are configurable as to the capabilities or compatibility of the output.
Transpilers can't always compile all features or implementations; usually they need shims in the output environment to work.
Debugging can be a big problem: the target language may be unfamiliar, the generated code may be incoherent to humans, or its structure may not correlate with the source. Source Maps may be generated to link the target code back to the source code.
Some variants produce very "idiomatic" (meaning "natural") code, which alleviates these issues.
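As a rough sketch (exact output varies by tool and settings), a transpiler such as Babel might turn ES2015 source into ES5 like this:

```js
// ES2015 source:
//   const double = n => n * 2;
//   let nums = [1, 2, 3].map(double);

// Possible ES5 output from the transpiler:
"use strict";
var double = function double(n) {
  return n * 2;
};
var nums = [1, 2, 3].map(double);
```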
You're probably already "compiling" from JS... as you're probably bundling modules and minifying.
You *will* be transpiling forever because ES will continue to evolve and engines will always lag.
Almost any language you can think of...
Including: PHP, Python, Ruby, Perl, Java, C/C++/C#, F#, Lisp, Haskell, Basic, Pascal, etc.
Can be compiled to JavaScript?!**
Polyfill is just the name we give to Web technology shims.
This is library code that is included in the target code that "fills" in the "gaps" between the target code and a particular JavaScript engine's capability.
There are also many individual ES5/6/next/etc component polyfills that are useful if you just need one or two newer features, or are concerned your code might run in an ES3 browser.
Try polyfill.io which will auto-magically determine the right polyfills and load them.
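A hand-rolled polyfill (versus letting polyfill.io choose for you) feature-detects first and only then fills the gap; a deliberately simplified sketch for Array.prototype.includes:

```js
// Only "fill" the gap if the engine lacks the feature.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (search) {
    // Simplified: a real polyfill also handles NaN and a fromIndex argument.
    return this.indexOf(search) !== -1;
  };
}
```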
Most modern JS development involves use of some form of modules, often formalized into packages.
On a *nix system equivalents are pear, cpan, yum, apt, etc.
Package managers do one or more of the following:
You'll see some package managers create flat dependency trees, deep ones, or somewhere in between. The dependency tree is all the packages required by all the other packages.
Generally you want flat (wide) for the browser and deep for the server.
Flat dependency trees are created because the manager has to figure out which single version of a package will satisfy all requirements. Sometimes this attempt fails and packages or dependencies have to be re-chosen.
A deep dependency tree tries to find the "best" dependency version for every package at every level.
Most package managers work off one or more config files (often JSON or YML) which specify dependent packages.
Each dependency is listed with its compatible versions, usually in SemVer syntax. This syntax can specify an exact version, or some sort of approximate version, such as any patch or minor version (see the range sketch below).
With non-specific syntax you run the risk that a build on one system may be different from another, and this is called "non-deterministic".
Some package managers offer control over this, like npm's deterministic "shrinkwrap" or Yarn.
A defined, sensible way of maintaining version numbers. From the site semver.org, see it for more details:
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
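For example, npm-style version ranges map onto these rules; a small sketch using the real `semver` npm package:

```js
const semver = require('semver');

semver.satisfies('1.4.2', '^1.4.0'); // true  — any 1.x.y at or above 1.4.0
semver.satisfies('2.0.0', '^1.4.0'); // false — a MAJOR bump may break the API
semver.satisfies('1.4.9', '~1.4.0'); // true  — only PATCH-level changes allowed
semver.satisfies('1.5.0', '~1.4.0'); // false — a MINOR bump is outside the range
```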
Even with flat dependency trees, the amount of code can really add up.
Tree Shaking removes, from the deployed bundle, code that will never be executed.
As Front End applications get larger, more complex, and have more dependencies, there's some manual and (at least semi-) automatic methods to tackle that.
Tree Shaking is included in some frameworks, like Angular 2.
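A minimal sketch of what tree shaking works from (whether dead code is actually dropped depends on the bundler and on side-effect hints):

```js
// math.js — two exports, but only one is ever imported.
export const add = (a, b) => a + b;
export const multiply = (a, b) => a * b; // unused → a tree-shaking bundler can drop it

// main.js
import { add } from './math.js';
console.log(add(2, 3));
```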
You're following best practices by keeping all your JS in small, organized files.
You bundle them up to ship to the browser.
Code Splitting is defining portions of code, sometimes called "chunks", to be loaded asynchronously on demand.
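A minimal sketch using dynamic `import()` (the `./chart.js` module, its `drawChart` export, and the trigger element are hypothetical); bundlers emit the imported module as a separate chunk fetched on demand:

```js
const button = document.querySelector('#show-chart'); // hypothetical trigger element

button.addEventListener('click', () => {
  import('./chart.js')                        // emitted as a separate "chunk"
    .then(({ drawChart }) => drawChart())     // hypothetical export
    .catch(err => console.error('Chunk failed to load', err));
});
```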
This isn't the best conceptual place to put this but it's messy...
This means HTML Templates (* and sometimes more) are converted to JavaScript during "compilation" (ie prior to running in a browser).
This is important because frameworks like Angular and React (using JSX) don't use pure HTML templates, and can identify errors at compile time.
Angular2 calls this AOT (Ahead of Time) compiling.
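A sketch of what such a compile step does in React's case (the classic JSX transform; Angular's AOT output looks very different):

```js
// JSX source:
//   const greeting = <h1 className="hi">Hello {name}</h1>;

// Roughly what the compiler emits — plain function calls, checkable at build time.
const React = require('react');
const name = 'Todd'; // hypothetical value, just so the snippet stands alone
const greeting = React.createElement('h1', { className: 'hi' }, 'Hello ', name);
```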
A very popular repository, handles acquisition & installing, and dependency management. Has nearly 300K modules in the registry, in CommonJS format.
Originally intended for Node, it's now become the leading repository/package manager.
Node packages can contain almost any asset.
npm creates a deep dependency tree for each package and subpackage, ensuring proper versioning, but also often duplicating required packages, sometimes numerous times.
Does not have its own repo, nor does it acquire modules (nor their dependencies). It depends on (and works with) "nearly every 3rd party library" being in-place.
Works with ES6, CJS and AMD modules.
Targets Browsers, with an emphasis on big SPA (Single Page Apps).
It can replace task runners, with the exception of linting and unit tests, but can be difficult to configure.
Supports live and hot reload.
Controlled by a (declarative) config file, and is very opinionated.
Has a loader/plugin system to do almost anything, including things often handled by Task Runners.
Webpack can do bundling, and loading on demand.
Also supports Hot Module Replacement.
Seems to be the choice of React, and generally most popular, despite a seriously complicated config.
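A minimal webpack.config.js sketch (babel-loader is a real webpack loader, but any real project will need considerably more configuration than this):

```js
const path = require('path');

module.exports = {
  entry: './src/index.js',                    // where bundling starts
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // Run ES2015+ source through Babel before bundling.
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};
```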
Bower is "unopinionated". It has a popular repository but also has "pluggable resolvers" which can acquire packages from npm, git, bitbucket, and pretty much any other repo.
It also handles dependency management, and installation. It's installed via npm.
Bower can handle packages of just about any asset.
It creates a flat dependency tree to minimize client-side transfer size, sometimes at the risk of the best versions.
Bower-related tooling can handle bundling and loading.
Does not have its own repository but works with npm and GitHub, referred to as "endpoints."
Works with ES6, AMD, CJS and Global modules.
Creates a flat dependency tree, and bundles as needed.
Loads SystemJS library for module loading.
Designed with HTTP/2 in mind.
Supports Hot Module Replacement - dynamically loading new code in the browser automatically, without losing state.
Seems to be the choice of Aurelia.
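A minimal sketch of loading a module on demand through SystemJS (the loader jspm sets up); the `app/main.js` module and its `run` export are hypothetical:

```js
System.import('app/main.js')
  .then(main => main.run())                   // hypothetical export
  .catch(err => console.error('Module failed to load', err));
```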
Does not have its own repo, nor does it acquire modules (nor their dependencies). It works with npm and CommonJS.
Targets Browsers.
Supports only Live Reload.
Has a plugin system to do almost anything, but needs to be controlled by your choice of Task Runner.
Bundles up into a single file.
Losing out in popularity to Webpack?
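A minimal Browserify sketch: ordinary CommonJS requires, bundled by running something like `browserify main.js -o bundle.js` (left-pad stands in for any npm package):

```js
// main.js — plain CommonJS; Browserify resolves npm packages at build time.
const leftPad = require('left-pad');          // any CommonJS npm package works
console.log(leftPad('7', 3, '0'));            // "007"
```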
These provide automation of the "tooling" around preparing modern dev assets into a run-ready state, such as "doing":
Linters flag erroneous or suspicious syntax or usage, and/or stylistic issues.
Probably the most controversial "rule" is around semicolons. ES doesn't need them, but without them you must ensure you don't use any syntax that will be parsed incorrectly. "Automatic Semicolon Insertion" is an actual spec'd function of ES.
Don't discount the importance of stylistic linting (usually used in conjunction with a Style Guide). It helps increase readability, clarity, re-use, and ease of debugging.
Not all these are available as linter plugins/styles. Some have IDE plugins or integration.
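A minimal .eslintrc.js sketch (the rule names are real ESLint rules), mixing correctness checks and stylistic ones, including the contentious semicolon rule:

```js
module.exports = {
  extends: 'eslint:recommended',     // flags erroneous/suspicious syntax and usage
  env: { browser: true, es6: true },
  rules: {
    semi: ['error', 'always'],       // require semicolons rather than rely on ASI
    'no-unused-vars': 'warn',        // suspicious usage
    indent: ['error', 2],            // stylistic
  },
};
```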
Often, when used with various frameworks or "full stack" products, they'll have their own:
There's many more...
Prettier is an opinionated code formatter.
Make a mess while writing your code and it cleans it all up for you.
Basically this includes both the conceptual and practical way your front-end talks to your back-end.
There's a distinction between components as a concept and components as implemented in various ways, including as "official" Web Components. As a concept:
How to get WC running live in a browser:
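As a concrete sketch, a minimal W3C Web Component (Custom Elements v1 plus Shadow DOM) that a supporting browser can run; the element and attribute names are made up:

```js
class HelloCard extends HTMLElement {
  connectedCallback() {
    // Shadow DOM keeps the component's markup and styles encapsulated.
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `<p>Hello, ${this.getAttribute('name') || 'world'}!</p>`;
  }
}
customElements.define('hello-card', HelloCard);
// In HTML: <hello-card name="Todd"></hello-card>
```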
React, Angular, Aurelia, Ember, and other frameworks are built on a "component" structure but do not use W3C WCs.
Their ability to utilize or work with W3C WC's varies greatly.
An FP program is a sequence of stateless function evaluations; FP tries to treat a programming function as a mathematical function, and less like a simple procedure.
Concepts include: First-class and higher-order functions, Immutability, Function Composition, Partial Application, etc.
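A minimal sketch of a few of those concepts in plain JavaScript (no library assumed):

```js
const compose = (f, g) => x => f(g(x));            // function composition
const add = a => b => a + b;                       // curried → enables partial application
const increment = add(1);                          // partial application
const double = x => x * 2;

const incThenDouble = compose(double, increment);  // functions as first-class values
incThenDouble(4);                                  // 10

// Immutability: derive new data instead of mutating.
const nums = Object.freeze([1, 2, 3]);
const doubled = nums.map(double);                  // [2, 4, 6]; nums is untouched
```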
You can do functional programming manually, or there's some libraries and frameworks to help:
RP is an abstraction for capturing and manipulating asynchronous data streams, and automatically "react"ing to changes.
Functional Reactive Programming combines the Functional and Reactive paradigms. Reactive is inversion of control.
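A minimal sketch assuming RxJS (one popular reactive library, here with its pipeable-operator style): clicks become a stream that is transformed and "react"ed to:

```js
import { fromEvent } from 'rxjs';
import { throttleTime, map } from 'rxjs/operators';

fromEvent(document, 'click')          // every click is a value on the stream
  .pipe(
    throttleTime(1000),               // ignore clicks that arrive too fast
    map(event => event.clientX)       // transform the stream
  )
  .subscribe(x => console.log(x));    // "react" to each new value
```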
MV* means "Model View ... something". It's shorthand for a "family" of similar design patterns where implementations fall in a spectrum vs meeting strict definitions.
There's Model View [Controller | Presenter | ViewModel | Intent | Update], the related PAC, and probably others:
Some consider some MV* a code smell as it's too tightly coupled, with insufficient separation of concerns.
We may be in a post-MV* world now, as "component"-based development may be our (present and) future.
To get a better understanding of MV*'s, see my "A Walk-through of a Simple JavaScript MVC Implementation" medium.com/@ToddZebert/a-walk-through-of-a-simple-javascript-mvc-implementation-c188a69138dc
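A very small sketch of the pattern family (not taken from the linked article; the #inc and #out elements are hypothetical): the Model holds state, the View renders it, and a Controller mediates user intent:

```js
const model = {
  count: 0,
  listeners: [],
  set(count) {
    this.count = count;
    this.listeners.forEach(fn => fn(count));        // notify observers of the change
  },
};

const view = {
  render(count) {
    document.querySelector('#out').textContent = String(count);
  },
};

const controller = {
  increment() { model.set(model.count + 1); },
};

model.listeners.push(view.render);                   // the View observes the Model
document.querySelector('#inc')
  .addEventListener('click', () => controller.increment()); // intent → Controller
```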
At any given point in time, the values of the program's variables are its "state".
State discussed here is not so much an academic discussion, but rather the Model and how the Model changes.
State Management is part of MV* frameworks but with M-less frameworks getting popular (like React), a solution for complex applications was needed.
Flux and Redux (the most popular), along with MobX, Relay, Alt.js, etc., have arisen to fill that gap.
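A minimal Redux sketch (createStore, dispatch, getState, and subscribe are Redux's actual API): state changes only by dispatching actions through a pure reducer:

```js
const { createStore } = require('redux');

// Reducer: a pure function (state, action) → new state.
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' });   // logs { count: 1 }
```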
There are many solutions, some work together, or have other requirements:
Mocha and Jasmine are by far the most popular, but some are popular in conjunction with certain frameworks.
Allow discrete, stateless, usually short-running code to be run in the cloud. It's the continued abstraction of HW.
Usually event-driven from things like databases, or mobile or IoT devices.
Benefits:
Providers (to varying degrees):
Generally providers require no specific provisioning, but may have some "tuning" configurations.
Metered ($$) usage, but usually scales down to $0 and scales up practically without limit.
It's ready for production use.
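A minimal sketch using the AWS Lambda Node.js handler shape (newer runtimes allow an async handler; the event's fields depend entirely on the trigger, and this one is hypothetical):

```js
exports.handler = async (event) => {
  const name = (event && event.name) || 'world';   // hypothetical event field
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```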
Frameworks and libraries also exist to help:
JS' popularity has spread its use to unexpected places, and even "dragged" other web technologies with it.
Web technologies have long been used to try to make app-like mobile experiences, or even native apps.
Some present a webpage as a mobile app; others result in a "native" app that hides the fact it's using a "web view" (browser), but at least you can download it from the App Store.
Below is a list of many solutions, followed by a more in-depth look at a few.
A project by GitHub created to support Atom (its editor), which became a general solution for making desktop applications with web technologies.
It uses modified versions of Chromium and Node that share a single V8 JavaScript engine to support both event loops! It's gained a lot of popularity and has generated quite an ecosystem of its own.
Just use Node, HTML, CSS, and whatever JavaScript-based front-end framework you choose.
Disadvantages include large file size (50/60MB+), performance, battery life, etc.
Successful examples include: OpenFin (a Financial "OS/VM"), Atom (IDE), Slack, Microsoft's Visual Studio Code, Postman.
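A minimal sketch of the Electron "main process" described above (the APIs shown are Electron's; the window then loads whatever HTML/CSS/JS front end you like):

```js
// main.js — the Node-side "main process".
const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadURL(`file://${__dirname}/index.html`);   // any bundled web app works here
}

app.on('ready', createWindow);
app.on('window-all-closed', () => app.quit());
```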
PWAs are web sites that use (admittedly new and evolving) standards to work not only as "regular" web sites but also as mobile apps.
To the user, the "site" has these app-like characteristics:
A common pattern is PRPL (often pronounced "Purple"): Push critical initial route resources, Render initial route, Pre-cache remaining routes, Lazy-load routes as needed.
Tech used to make PWA's:
These technologies replace the deprecated "appcache" solution.
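A minimal Service Worker sketch (standard browser APIs; the cached file names are hypothetical): pre-cache a few resources at install time and serve them when offline, which is what appcache used to attempt:

```js
// In the page: register the worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js — pre-cache, then answer fetches from the cache first.
self.addEventListener('install', event => {
  event.waitUntil(
    caches.open('v1').then(cache => cache.addAll(['/', '/app.js', '/styles.css']))
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});
```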
Google made Lighthouse to test PWAs and provide feedback: developers.google.com/web/tools/lighthouse/
Also by Facebook, it extends React to run as a "real" native app that exposes native device components, APIs and modules to JavaScript. It also allows integration with native code written in Objective-C, Java, or Swift.
In some sense, it's like Cordova/PhoneGap taken one step further.
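A minimal React Native component sketch (View, Text, and StyleSheet are the real react-native primitives); these map to native UI widgets rather than to HTML elements:

```js
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';

const styles = StyleSheet.create({
  box: { padding: 20 },
});

export default function Hello() {
  return (
    <View style={styles.box}>
      <Text>Hello from native UI</Text>
    </View>
  );
}
```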
Because RN is different enough from React, the Holy Grail (nee!) of sharing components and code isn't really realized. So, this was born:
"React Native for Web" brings the platform-agnostic Components and APIs of React Native to the Web.
Originally by Progress in 2015, NativeScript, like React Native (as compared to Ionic/Cordova), does not use web views (basically the browser) on mobile devices; it uses native UI.
Works with either Angular/TypeScript, or JavaScript, or even Vue.
It's production ready and has a great "Showcase" of results.
While Augmented Reality and Virtual Reality are themselves still in their (modern) infancy, JavaScript implementations have not been far behind.
To get a good, smooth display, the code needs to render at 60 frames per second or faster; jank is bad while scrolling, but worse in AR/VR.
Build VR websites and interactive 360 experiences with React
It uses React concepts (although more similar to React Native), components and (optional, though recommended) JSX, Three.js, WebGL, and physical headsets via WebVR.
The VR can be experienced with a headset, or in a browser - even embedded on a webpage.
Although not an official "thing" a number of efforts to make "React AR" have been based on React, React Native or React VR.
WebVR is an open specification for web-based VR to a physical headset.
Support is scattered, and most headsets are only really supported on a single browser, like Oculus Rift & HTC Vive on Firefox (nightly), Windows MR headsets on Edge, or Daydream or Cardboard on Chrome.
There's even a WebVR polyfill, although it's very limited in support. There's also a simulation plugin for Chrome.
But, it's definitely experimental at this stage.
Efficient Augmented Reality for the Web - 60fps on mobile!
It uses WebGL, WebRTC, Three.js, and jsartoolkit5. Device/browser support is very spotty, but it's also been shown to work on both HTC Vive and HoloLens.
But, there's a number of demos, articles and videos on it.
While JS's inherent async nature and deftness with data "streams" make it seem ideal, its inability to deliver accurate, repeatable timing makes its use limited, though accessible to hobbyists.
Examples follow.
Lead Drupal Developer at @meetmiles