Continuing our series, we come to a standalone web component that I'll dig into over the next two posts. First, a simple text enhancement that's generalized for reuse (this one), followed by building upon that same generalized tag so that it meets a unique use-case in HAXcms.
Links
- Codepen demo with multiple tags
- Demo (the glossary demo in the next post is more impressive)
- npm install @lrnwebcomponents/enhanced-text
- microservice source
- web component source
What is the scope?
The goal was to create an element that could analyze text and augment it automatically. The first decision then becomes: is this driven by Light DOM or by data / properties? So obviously I chose both ;). No, really, it's a mix.
Light DOM / slotted content nodes were chosen to supply the body of text because:
- Content for our CMS implementations already existed
- Wrapping a tag around existing <p> / normal HTML tags is highly semantic / progressive enhancement
- We could maintain higher SEO by default by using the textual content that already exists and simply wrapping it
That said, we also discussed that not everyone would want to "enhance" text with the options we had planned, and that it needed to be opt in. Therefore a Boolean property would be needed, as well as support for multiple forms of "enhancement":
- vide - support for augmenting text from this endpoint
- loading - status to visualize that something is happening
Additional considerations involved fallback support, so that if augmentation from the microservice failed the element would still render the slotted text. CSS / HTML is very minimalist in this element beyond the spinning loader shown while loading.
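To make the options above concrete, here's a minimal sketch of how they might be declared as Lit reactive properties. The real element's declarations may differ; the names mirror the ones used in the code later in this post.

static get properties() {
  return {
    ...super.properties,
    // opt in to enhancement running automatically on first render
    auto: { type: Boolean },
    // apply the text-vide / bionic reading style enhancement
    vide: { type: Boolean },
    // visual indicator that a microservice call is in flight
    loading: { type: Boolean, reflect: true },
    // how much of each word text-vide should bold
    fixationPoint: { type: Number },
    // pull wikipedia data for matched terms (used with the glossary)
    wikipedia: { type: Boolean },
  };
}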
Demo codepen
Here's a demo showing several props and variations of the enhanced-text tag, including vide, wikipedia, glossary, and toggling the auto Boolean in order to enhance the text immediately vs on an event (clicking a button in the vide case).
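If you just want the shape of the markup without opening the pen, a hypothetical usage looks something like this. The attribute names follow Lit's default lowercase mapping and the exact demo markup in the codepen may differ:

<!-- auto: enhance as soon as the element first renders -->
<enhanced-text vide auto>
  <p>Some existing paragraph content to bionic-read.</p>
</enhanced-text>

<!-- manual: leave auto off and call enhance() from an event -->
<enhanced-text id="demo" vide>
  <p>This text stays untouched until the button is clicked.</p>
</enhanced-text>
<button onclick="document.getElementById('demo').enhance()">Enhance</button>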
enhanced-text web component
The web component is almost exclusively data and option driven so let's focus on how we obtain the data from the slot and make decisions about what options to request.
firstUpdated(changedProperties) {
  if (super.firstUpdated) {
    super.firstUpdated(changedProperties);
  }
  // automatic enhancement if set, otherwise manual
  if (this.auto) {
    this.enhance();
  }
}
If we have auto set to TRUE then we automatically call enhance() so the element requests processing right away. The firstUpdated life-cycle is similar to connectedCallback, but Lit async renders shadowRoots, so this is the first time it's safe to query the information there and work with it.
Let's take a look at enhance() now, which does the heavy lifting.
// apply enhancement to text. if not in auto, user must invoke this.
async enhance() {
  const body = this.innerHTML;
  this.loading = true;
  if (this.vide) {
    await MicroFrontendRegistry.call('@enhancedText/textVide', {body: body, fixationPoint: this.fixationPoint}, this.enhancedTextResponse.bind(this));
  }
  if (this.haxcmsGlossary && (this.haxcmsSiteLocation || this.haxcmsSite)) {
    if (this.haxcmsSite) {
      await MicroFrontendRegistry.call('@haxcms/termsInPage', {body: body, type: 'site', site: this.haxcmsSite, wikipedia: this.wikipedia}, this.applyTermFromList.bind(this));
    }
    else {
      await MicroFrontendRegistry.call('@haxcms/termsInPage', {body: body, type: 'link', site: this.haxcmsSiteLocation, wikipedia: this.wikipedia}, this.applyTermFromList.bind(this));
    }
  }
  // all of the above run in order
  this.loading = false;
}
Here we run through the various options from our properties, leverage this.innerHTML to obtain the body content to process, and let the MicroFrontendRegistry calls ship data to and from the microservices.
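Conceptually, each of those calls boils down to POSTing the slotted HTML to a serverless endpoint and handing the JSON response to a callback. Here's a rough sketch of that pattern; the URL is made up, and the real MicroFrontendRegistry handles routing, error states, and more:

// illustrative only: roughly what a registry call resolves to
async function callTextVide(body, fixationPoint) {
  const response = await fetch('/api/services/textVide', { // hypothetical route
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ body, fixationPoint }),
  });
  // responses come back shaped like { status, data }
  return response.json();
}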
Trusting it a bit too much for now..
There's not much going on in enhancedTextResponse, which is the callback that fires when we call out to "vide" the text.
enhancedTextResponse(data) {
  if (data.status && data.data && data.data.length) {
    let parser = new DOMParser();
    let doc = parser.parseFromString(data.data, 'text/html');
    this.innerHTML = doc.body.innerHTML;
  }
}
If our data from the microservice gets poisoned (somehow), then the DOMParser will at least handle the script tag, though this is a very minimal form of XSS protection; further work to shore up the endpoint is needed for larger implementations that would require additional layers of security assurance.
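As one example of that extra layering, you could scrub the parsed document before writing it back into the element. This is only a sketch of an additional client-side pass, not something the current element does:

// strip anything executable the parser kept around (sketch, not in the element today)
function scrub(doc) {
  // remove any script tags outright
  doc.querySelectorAll('script').forEach((node) => node.remove());
  // drop inline event handler attributes like onclick / onerror
  doc.querySelectorAll('*').forEach((node) => {
    [...node.attributes].forEach((attr) => {
      if (attr.name.toLowerCase().startsWith('on')) {
        node.removeAttribute(attr.name);
      }
    });
  });
  return doc;
}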
vide endpoint
text-vide is an open source attempt at implementing a technique known as Bionic Reading, a browser plugin that enhances blocks of text in order to help the user's eye focus on enough of each word to power-skim.
The endpoint for this microservice is incredibly minimal (13 lines minus comments) as a result of all the work going on in the text-vide package.
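To give a feel for what the package does, calling it looks roughly like this; the output shown in the comment is approximate and based on my reading of the text-vide docs, so the exact split per word may differ:

import { textVide } from 'text-vide';

// bold roughly the first part of each word so the eye can anchor on it
const highlighted = textVide('Bionic reading for the web', { fixationPoint: 4 });
// result wraps the start of each word in <b> tags,
// e.g. '<b>Bio</b>nic <b>rea</b>ding ...' (exact split depends on the fixation point)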
import { stdPostBody, stdResponse, invalidRequest } from "../../../utilities/requestHelpers.js";
import { textVide } from 'text-vide';

export default async function handler(req, res) {
  // use this if POST data is what's being sent
  const body = stdPostBody(req);
  const text = body.body;
  if (text) {
    const highlightedText = textVide(text, { fixationPoint: body.fixationPoint ? parseInt(body.fixationPoint) : 4 });
    res = stdResponse(res, highlightedText, {cache: 86400, methods: "OPTIONS, POST" });
  }
  else {
    res = invalidRequest(res, 'missing `text` param');
  }
}
We default the fixation point to 4 for now, but the endpoint accepts a fixationPoint variable to modify it. We still need to roll this out to end users with real content to see if it is valuable and what the best level of enhancement is as far as the fixation point goes.
In the next article I'll cover how this same element is wired up to our HAXcms system in order to supply glossary terms enhanced in context. This is our primary target: allowing students to have quick references to definitions in the event they forget a term and how it's being used in a course. However, we don't want our faculty (the end users building the content) to have to recall all the places a term is used, or to need to maintain and manage the definition once they write it.