Electron Breaks Brain

Trying to get Electron set up has made me realize I don’t understand so many aspects of my build, deploy, and hosting setup.

Showing anything

I will just let my comments speak for themselves right now rather than try to re-type out the long story of my confusions and failures.

// mainWindow.loadFile("dist/index.html"); // doesn't work (can't find rest)
// mainWindow.loadURL("http://localhost:5173/play"); // works, though requires dev server
// mainWindow.loadFile("index.html"); // doesn't work (uses top-level index.html, even if this file (main.js) is in dist/)
// mainWindow.loadURL(`file://${import.meta.dirname}/dist/index.html`); // doesn't fix it.
// mainWindow.loadFile(fileURLToPath(new URL("./dist/index.html", import.meta.url))); // doesn't fix it
// TODO: try vite base config https://vitejs.dev/config/shared-options.html#base
// or other options claude came up with
// maybe a separate build and copy all files into electron-dist/ or something?
// mainWindow.loadURL("/"); // for react router? nope
// mainWindow.loadURL("/play"); // for react router? nope
mainWindow.loadFile("index.html"); // showing error page

Aside: it’s so crazy to spend days and days and days on this when MY END GOAL, like my ideal goal here, is to replicate exactly what I already have working on my website, in a 100MB executable. Just so that it can go on Steam, because people look at Steam.

Serving URLs

After giving up (hopefully temporarily) on running “directly” with some kind of yarn run electron ., I tried using electron-builder, copying everything, and loading the app.

Lo and behold, I get the stylized app showing me a special new error I hadn’t seen.

Opening the console to see the full error:

Error: No route matches URL "/Users/max/repos/ttmh/frontend/electron-dist/mac-arm64/ttmh-frontend.app/Contents/Resources/app/index.html"

So… something with the router. I think. That’s react-router(-dom).

The internet said I must switch from BrowserRouter to HashRouter, which I don’t want to do, because

I tried interrogating Claude¹ on whether I really needed to do this. I mistrust both webdev StackOverflow and LLMs. It started telling me about Electron’s special implementation of the file:// protocol. And despite Electron’s chrome(ium) conforming to the HTML(5?) history API (edit: or not?), because the app sends a request to the server for every page expecting to use http(s), the use of file:// screws things up (edit: even when not sending requests, I think it still wouldn’t work. But that was also happening).

Wait… does my app actually send a request to the server for every route change?

Because vite doesn’t log anything when you run the dev server, I never actually knew when something made a request to the frontend server. And because Cloudflare Pages always just worked, I never cared.

I fired up a python server in my dist/ directory

python -m http.server 4000

Home page: beautiful, everything looks exactly perfect.

Then I click on a single URL and I get a total and utter 404 from, I assume, python’s server:

I can’t believe it. So the whole “local, client-side routing in react-router(-dom)” is actually doing server requests each time, reloading the same page, and then doing some kind of magic to display the correct one?

So wait, how does Cloudflare actually serve pages? Is it just sending index.html every time to every route? It can’t be. How would it know to do that?

Yes, it’s doing exactly that. Quoth the docs:

Single-page application (SPA) rendering

If your project does not include a top-level 404.html file, Pages assumes that you are deploying a single-page application. This includes frameworks like React, Vue, and Angular. Pages’ default single-page application behavior matches all incoming paths to the root (/), allowing you to capture URLs like /about or /help and respond to them from within your SPA.

Well I’ll be.

So now, do I

Well, one interesting note is that if I visit index.html on my local python server, I actually get

Error: No route matches URL "/index.html"

So hitting index.html should never work in the first place. This might not change my problem at all, but somehow it’s comforting to hit the same react-router(-dom) error in electron as with my python server.

I think what’s confusing me is the very basic first page in React Router’s docs says:

Client side routing allows your app to update the URL from a link click without making another request for another document from the server. Instead, your app can immediately render some new UI and make data requests with fetch to update the page with new information.

So I must be terribly screwing this up.

OK but regardless, I can only even consider doing it properly if I get something displaying on the page. (Though if I can get something displaying on the page… maybe who cares about doing it properly.)

OK yeah of course I can’t resist checking this

// current
<a href={nl.url} className="hover:underline">
  {nl.display}
</a>

// how i'm supposed to be doing it
<Link to={nl.url}>{nl.display}</Link>

wow, this is… what. So both links, if I hover over them, show this in the bottom of the web page.

Using my python server because it (appropriately) refuses to serve anything but /, I can tell the two apart: the plain <a> triggers a full request (and a 404), while the <Link> navigates without touching the server at all.

So I guess I’ve been missing out on all the react-router magic speed this entire time.

Now, I’m still tempted to have different pages because I think it’s good for SEO or something. But wow, holy cow.

I’m a bit nervous because I think some logic in my game breaks if I don’t get to hard refresh the page sometimes (e.g., nav away from game, nav back, state is weird). So I’m not sure I can entirely live without hard refreshes.

My question now is: how were the <a> links working at all? React Router must somehow be intercepting the same old index.html reply my (ahem, Cloudflare’s) server was sending back the whole time, checking the browser address bar, and then swapping in the correct page without me even realizing I was doing it wrong.

OK, on second inspection, the local Link is not quite instant. Though, vs requesting from a server, it’s probably still faster, right? But the other thing I noticed is there’s some flashing happening. I recorded a video of switching between them and it’s just the empty page (w/ the circuit background) flashing between them; i.e., the absence of the content box. (Could I change this?)

I’m too curious—what is that <Link> actually doing? Can I inspect it?

<a class="hover:underline" href="/changelog">Changelog</a>

You cannot be serious. It’s just an <a>???? There’s no JavaScript??? What is happening?!?!?!?!?!?

I think I need to go lie down. And then read the React Router docs again. Or not, and just keep googling how on earth it’s so hard for me to get my attempt at the most boring, typical setup to work.

Command-clicking a link to open in a new tab does hard-load the URL (as evinced by my python server still breaking in that case).

https://reactrouter.com/en/main/start/concepts#link tries to explain this, but I don’t know how they intercept the browser’s default behavior only in the exact cases they want.

Update from future Max: I think it probably attached a normal old onClick handler and did an event.preventDefault, it’s just that onClick isn’t rendered as an HTML attribute so I didn’t see it. So it’s probably not that crazy. Good reminder to come back to stuff the next day when it’s past 7pm.
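Future Max again: the shape of that handler is guessable, and it also explains why command-clicking falls through to the browser. This is my reconstruction, not react-router’s actual internals; isPlainLeftClick and makeLinkClickHandler are names I made up.

```javascript
// A guess at how <Link> can be "just an <a>": the interesting part lives in an
// onClick handler (which never shows up as an HTML attribute) that only
// intercepts plain left-clicks.
function isPlainLeftClick(event) {
  return (
    event.button === 0 &&   // left button only
    !event.metaKey &&       // cmd-click: let the browser open a new tab
    !event.ctrlKey &&
    !event.shiftKey &&
    !event.altKey &&
    !event.defaultPrevented
  );
}

function makeLinkClickHandler(navigate, href) {
  return (event) => {
    if (isPlainLeftClick(event)) {
      event.preventDefault(); // skip the full-document request...
      navigate(href);         // ...and do a client-side route change instead
    }
  };
}
```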

All right, checking this out next: https://github.com/daltonmenezes/electron-router-dom

I think it just uses a Hash Router. All the extra code is to handle multiple windows.

export function Router(routes: RouterProps) {
  ...

  return (
    <HashRouter basename={windowID}>
      <Routes>{transformedRoutes[windowID]}</Routes>
    </HashRouter>
  )
}

The electron-vite project agrees with this too:

Electron does not handle (browser) history and works with the synchronized URL. So only hash router can work. … For react-router-dom, you should use HashRouter instead of BrowserRouter.

Source

I harassed Claude, Googled, and read Electron’s docs for a while longer to try to figure out exactly what aspect of Electron makes it not support the normal browser history, but I couldn’t understand it. So I’m going to just give up and try out the Hash Router. But first I need to try to have multiple configs with Vite.
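One thing I can at least convince myself of: why the hash approach sidesteps all of this. The fragment after “#” is never part of a request or a file lookup, so file:///…/index.html#/play always resolves to index.html, and the router reads the route from location.hash. A tiny sketch (hashRoute is my name, not a router API):

```javascript
// The fragment never leaves the browser: the "server" (or the filesystem)
// only ever sees index.html, and the route lives after the "#".
function hashRoute(href) {
  const hash = new URL(href).hash; // e.g. "#/play"
  return hash ? hash.slice(1) : "/";
}
```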

Multiple Vite Configs

It is nice that playing with related tech in your free time can help your main project.

The other day I was trying to get a new repo set up using Vite instead of Webpack because using Webpack makes me want to set my desk on fire. But it was one of my “experiment” repos where I want to have a single build config but many sets of input→output. In other words, each experiment is fully separate—HTML, source TypeScript, the works—so that I don’t have to worry about refactoring as I learn new things and change everything. (I guess maybe they can share generated Tailwind CSS, that’s fine.) But I don’t want to set up a new build config or new repository each time.²

The solution to this was sitting right in front of me the whole time (“multi-page app”), but I spent enough time confused and reading about Vite and trying things that I feel a tiny bit more confident with, e.g., using multiple configs.

So I’m going to try splitting vite.config.ts into multiple options

… and then passing the config on the command line.

Spent some time trying to figure out how to send different environment variables for web/electron. I think if I don’t want to prefix them LIKE_THIS=here ... in the build commands then it’s better to just have whole separate file sets. A bit ugly copy-pasting dev/prod configs like this, but probably better than these critical variables living in strings in the package.json scripts.
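For concreteness, the shape I’m going for is something like this. The file name, base, and outDir here are my choices for the sketch, not anything Vite mandates:

```javascript
// vite.electron.config.js (sketch): same app, electron-specific settings.
import { defineConfig } from "vite";

export default defineConfig({
  // Relative asset URLs, so file:// pages can resolve them.
  base: "./",
  build: { outDir: "electron-dist" },
});

// package.json script, picking the config on the command line:
//   "build:electron": "vite build --config vite.electron.config.js"
```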

Then spent some time trying to name things. Specifically the axis of web vs electron. Many other terms were taken. I came up with “deployment”:

| name       | owner            | examples                             |
|------------|------------------|--------------------------------------|
| mode       | vite             | development, production              |
| target     | vite             | modules, es2020, chrome87, firefox78 |
| deployment | me               | web, electron                        |
| platform   | electron-builder | win, mac, linux                      |

Refs:

Have started a deployment.tsx file that contains the implementation differences between deployments. Starting with just the HashRouter. We’ll see if this one file is enough, or whether something like code splitting will necessitate more over-engineering.

And… it works.

Wow, so switching to the HashRouter just super worked. Can’t login yet because that’s going to be a whole thing, but pages using <Link> can go to each other. All the vanilla webpages render.

Main glaring thing to fix is raw resource URLs, specifically images that start with /img/.... A bunch of these are sent from the server, too, for the game. Going to hold off on fixing the home page ones because the electron app might not have the home page at all. Playing the game is going to be the next main thing.

But just before that, I’d like to try out a super simple preload context bridge, inspired by the vite-electron-builder repo. Exporting the node process versions, and as a stretch goal, trying IPC.

Actually, scratch that, the vite-electron-builder has disabled the renderer sandbox. I’m still fuzzy on the preload vs renderer context, but a comment in their code says “Sandbox disabled because the demo of preload script depend on the Node.js api.” I have no reason to do this, so I’ll try some simple IPC instead.

Just kidding lol

Let’s try out IPC

https://www.electronjs.org/docs/latest/tutorial/ipc

What’s funny is the tutorial has one way of setting up IPC in the main process

function createWindow() {
  // ...

  ipcMain.on("set-title", (event, title) => {
    const webContents = event.sender;
    const win = BrowserWindow.fromWebContents(webContents);
    win.setTitle(title);
  });
}

… but then when they go to explain it, they immediately do it differently:

function handleSetTitle(event, title) {
  const webContents = event.sender;
  const win = BrowserWindow.fromWebContents(webContents);
  win.setTitle(title);
}

app.whenReady().then(() => {
  ipcMain.on("set-title", handleSetTitle);
  // ...
});

So whatever, hoping that doesn’t matter. (It shouldn’t, mostly: both register the same listener on ipcMain. The real difference is that the first version re-registers it every time createWindow runs, while the second registers it exactly once in whenReady.)

I was having trouble where the IPC thing wasn’t being exposed in the renderer (window.electronAPI was just undefined). It occurred to me that I have no idea how to debug stuff in the main (or preload?) code. The prospects in the guide look grim:

https://www.electronjs.org/docs/latest/tutorial/debugging-main-process

I guess you have to expose a port for the “V8 inspector protocol” and then attach something to it that understands it?

Thankfully just specifying the directory of the preload script seemed to fix it:

{
  // ...
  preload: join(import.meta.dirname, "electron-preload.js"),
}

… because at least now I’m getting an error. But this error is what’s baffling me, and why I turned back to this devlog.

Unable to load preload script: /Users/max/repos/ttmh/frontend/bin/mac-arm64/ttmh-frontend.app/Contents/Resources/app/electron-preload.mjs
SyntaxError: Cannot use import statement outside a module
at runPreloadScript (VM4 sandbox_bundle:2:83494)
at VM4 sandbox_bundle:2:83813
at VM4 sandbox_bundle:2:83968
at ___electron_webpack_init__ (VM4 sandbox_bundle:2:83972)
at VM4 sandbox_bundle:2:84095

The key here is Cannot use import statement outside a module.

But I don’t understand why it doesn’t think it’s a module.

Do I make this one file a require() style??

There’s a terrifyingly long (unresolved) thread in electron forge that doesn’t not make me think that may be required: https://github.com/electron/forge/issues/2931

OMG that was it. I guess preload can’t be an ES6 module in this context? The vite-electron-builder template does change lots of default settings, like sandbox: false, which may have been allowing import to work there.
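So the preload ends up require()-style. Here’s a sketch of the shape mine takes, with the bridge wiring factored into a helper purely so it’s visible (buildElectronAPI is my name, not an Electron API; setTitle matches the tutorial’s set-title channel):

```javascript
// electron-preload.cjs sketch: a sandboxed preload gets a CommonJS-style
// environment, so use require(), not import. Expose a narrow API rather
// than the whole ipcRenderer.
function buildElectronAPI(ipcRenderer) {
  return {
    setTitle: (title) => ipcRenderer.send("set-title", title),
  };
}

// In the actual preload file:
//   const { contextBridge, ipcRenderer } = require("electron");
//   contextBridge.exposeInMainWorld("electronAPI", buildElectronAPI(ipcRenderer));
```

After which window.electronAPI.setTitle("…") exists in the renderer.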

Updates from working with Greg

Electron cookies: the good

cookies, in general, are cool, because the following works:

  1. (Chrome) If you manually set the (first party, session) cookie, are you suddenly logged in?

    • yes
  2. (Electron) If I save a cookie, can I see it in dev tools?

    • yes
  3. (Electron) Will the (electron) browser then use it for requests?

    • yes

If I set the following cookie (in Electron’s main process) and hit the production website, it all works and the game can be played (first time ever!):

const cookie: CookiesSetDetails = {
  url: "https://talktomehuman.com",
  name: "session_jwt",
  value: "(REDACTED DUH)",
  domain: "talktomehuman.com",
  path: "/",
  secure: true,
  httpOnly: true,
  // expirationDate: // TODO: maybe set (unix seconds), maybe OK (expires w/ session)
  sameSite: "lax",
};
await session.defaultSession.cookies.set(cookie);

However…

Electron cookies: the bad

Summary:

… the above has stringent requirements:

Error: Failed to set cookie - The cookie was set with an invalid Domain attribute.

Some additional info found in a few places:

https://github.com/electron/electron/issues/4422#issuecomment-182618974

Seems to indicate URL/domain cookie matching guidelines (? rules?) which… we’d totally violate w/ file:// and on a different domain.

https://github.com/electron/electron/issues/27981

https://github.com/electron/electron/issues/22345#issuecomment-1057130188

Some code (and refined by the next two comments) that intercepts web requests, checks the headers, and modifies cookie parameters on the fly. Probably not necessary since I control the backend, too, but good to know such hackery is (or at least was, in 2020) possible.

Electron: setting a cookie with file://

  1. The first answer, by the author in 2018, registers a new scheme (app://) to overcome the file:// limitation. But a comment from 2019 says this no longer works. The fact that it doesn’t work checks out with what I learned above (a GH issue from 2021 saying cookies aren’t supported on custom protocols). (see third answer)

  2. The second answer is useless

  3. The third answer was from the 2019 commenter, but they probably just changed some configs and ended up using a custom protocol (app://) that presumably had cookies working for them. (I’m not sure exactly when it stopped working.)

  4. The fourth answer (2023) is probably the most helpful. They outline two scenarios:

    1. Hitting a remote URL (i.e., using the standard http(s):// scheme), you can set Cookies using Electron’s API—at least, for that particular domain

      • This is true; I verified it in the section above
    2. Hitting a local URL (i.e., using file://) and then trying to make a fetch call to a server, they had trouble trying to “set or retrieve cookies.”

      • However, from what I understand of Chrome / web browsers, it makes sense that you wouldn’t be able to simply set cookies from HTTP headers in the normal flow, because (a) we’re on file://, (b) the origin is different
      • So for me, the interesting question is whether we could set them in a privileged way (e.g., in the main process) and then have the browser simply use them—send them with requests. (Guessing no.)
      • Their solution was to reduce security settings (enable node integration; disable context isolation), then use Node’s HTTP module to manually make fetch calls, constructing the HTTP headers w/ cookies by hand.
        • At this point, I might as well just use bearer auth.

https://developer.mozilla.org/en-US/docs/Web/WebDriver/Errors/InvalidCookieDomain

This is Mozilla’s documentation on WebDriver, so it’s not necessarily the same in Chromium or Electron. However, some concepts seem relevant:

high level Q to electron discord

Ahh, discord and/or github discussions, where your problems go to die silently.

hey all, i have a very high-level question. do major electron apps like slack, discord, etc.:

  1. package their HTML files locally, and run over file://?
  2. package their HTML files locally, and run a local webserver (http://localhost:...)?
  3. package their HTML files locally, and use a custom protocol (e.g., app://)?
  4. just open a browser window to their remote websites and run over https://?
  5. Something else?

For context, I’ve been working on method 1: trying to get a relatively simple app to run over file://. But I seem to constantly run into issues and rough edges, like routing and cookies. I’m wondering whether anyone knows how the major apps run?

next steps

two main paths forward from where I stand:

  1. send auth headers directly w/ bearer auth in fetch() calls

    • would the lack of cookies have issues w/ redirects and/or the loader logic in the router? i.e., with the -> /login redirects?
    • maybe not, because we’re always able to GET index.html, the only webpage on the frontend. I think all interesting calls must be manually fetch()d w/ cookie auth to backend
    • issues in public/ still remain? how hard to address?
  2. abandon a local build entirely and use a local web server or browser window to the remote site

    • potentially include a local landing page for the auth flow?

I’m still tempted to try 1, probably because I’m a fool. It might be worth sketching out what 2 would look like in more detail.
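If I do try 1, the core of it is just a fetch wrapper. A sketch: makeApiFetch and getToken are my names, and fetchImpl is injected only so the thing can be exercised without a network.

```javascript
// Option 1 in miniature: no cookies, send the JWT ourselves on every API call.
function makeApiFetch(fetchImpl, apiBase, getToken) {
  return (path, options = {}) =>
    fetchImpl(apiBase + path, {
      ...options,
      headers: {
        ...(options.headers || {}),
        Authorization: `Bearer ${getToken()}`, // replaces the session_jwt cookie
      },
    });
}

// const apiFetch = makeApiFetch(fetch, "https://talktomehuman.com", () => jwt);
// apiFetch("/api/whatever");
```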

Investigating Slack

Slack is a legit Electron app. (One of the first?) Do they package all assets locally and run on file://, or another protocol? Or are they just serving their website from an Electron shell?

SLACK_DEVELOPER_MENU=true open -a /Applications/Slack.app

some assets exist in bundle locally, but appears to load page and tons of assets from URL (the only document is from https://app.slack.com/client/XXXXXXXXX/YYYYYYYYYY)

however, if you open without internet, it still works! even reloading is fine. however, force reloading displays a blank screen. but then getting back online doesn’t fix it without a full reboot.

Note from investigating Service Workers later:

“If you force-reload the page (shift-reload) it bypasses the service worker entirely. It’ll be uncontrolled. This feature is in the spec, so it works in other service-worker-supporting browsers.”

source

checking “Disable cache” and reloading keeps the layout. However the network tab indicates the document involves (was served by?) a “Service Worker” (web thing I’ve heard of but don’t know anything about). And if you look into the details, it says

Source of response: ServiceWorker cache storage

… so it might be that the doc is retrieved dynamically by a service worker which has its own caching logic that circumvents the “Disable cache” check.

Other notes:

Looks like they may have written about this: https://slack.engineering/service-workers-at-slack-our-quest-for-faster-boot-times-and-offline-support/

“When you first boot the new version of Slack we fetch a full set of assets (HTML, JavaScript, CSS, fonts, and sounds) and place them in the Service Worker’s cache. We also take a copy of your in-memory Redux store and push it to IndexedDB. When you next boot we detect the existence of these caches; if they’re present we’ll use them to boot the app. If you’re online we’ll fetch fresh data post-boot. If not, you’re still left with a usable client.”

So it’s literally a chrome shell w/ a service worker script (if that! honestly it is probably loaded too).

Also interesting:

Note that most binary assets (images, PDFs, videos, etc.) are handled by the browser’s cache (and controlled by normal cache headers). They don’t need explicit handling by the Service Worker to load offline.

I still wonder why so many assets were bundled.

Service Workers are reinstalled every time their script changes. They update their service worker every time any asset changes as well:

“Every time we update a relevant JavaScript, CSS, or HTML file it runs through a custom webpack plugin that produces a manifest of those files with unique hashes (here’s a truncated example). This gets embedded into the Service Worker, triggering an update on the next boot even though the implementation itself hasn’t changed.”

It’s kind of interesting. It feels wrong. Like ultra heavy-handed cache invalidation.

More notes:

On caching API responses:

"A common workflow at Slack is to release new features alongside corresponding changes in our APIs. Before Service Workers were introduced, we had a guarantee the two would be in sync, but with our one-version-behind cache assets our client was now more likely to be out of sync with the backend. To combat this, we cache not only assets but some API responses too.

The power of Service Workers handling every network request made the solution simple. With each Service Worker update we also make API requests and cache the responses in the same bucket as the assets. This ties our features and experiments to the right assets — potentially out-of-date but sure to be in sync."

(emphasis theirs)

I don’t fully understand. Need example.

One more interesting thing: what’s embedded at the top of the page (<script> and <style> in <head>). These are presumably meta observability stuff (profiling and tracking), bootstrapping, offline detection, and barebones skeleton setup.

  1. script: primarily haveAssetsLoaded() and reloadIfIssue()

    • utilities for
      • timing loads and logging
      • sending a beacon to servers if trouble loading
      • showing trouble loading overlay over whole window (document)
    • the “trouble loading” detector is nice:
      • show the trouble loading overlay
      • set “online” listener that simply reloads the page
    • haveAssetsLoaded(): checking whether assets (CSS / JS) all loaded
    • reloadIfIssue(): checks assets, hooks up online listener, reload retry logic, CDN fallbacks, and clearing cache / service worker
  2. style for trouble loading overlay

  3. script: hooking up profiling for long tasks

  4. script: dark mode w/ hardcoded CSS (probably to prevent a bright flash)

  5. script: setup a global error handler that records all errors and sends them to slack

  6. script: grab the CDN location

  7. script: preload fonts by creating links to them and setting preload + prefetch

  8. script: mysteriously set jLen = 4 and cLen = 3 (IIRC these are lists of JS and CSS assets)

  9. (+10, 11) script: w/ profile, force load + inject 3 (cLen?) stylesheets (“slack kit”, “boot”, “helper” styles) into head

Scripts in body:

  1. translate the trouble loading error message
  2. download localization JSON files
  3. webpack shims and sets up enormous asset lists (json, css, img, animations, and probably all the JS too)
  4. (+ onwards) load manually specified JS files, I think these are the ones referred to by jLen

I’m not positive exactly where all the loading is kicked off, but I didn’t search for it, and there’s a lot of minified code.

I’m not even sure whether they have an initial index.html inside. app.asar is 12 KB. It contains:

OK cool, let’s check out my app-arm64.asar. It has

The code bundles in dist/:

And there are 3 HTML files:

Tracing code path:

  1. boot.bundle.js (obfuscated)
    • seems like utils
    • one interesting thing, I do see slack://; so that’s one potential protocol
    • references ./main.bundle.js
  2. main.bundle.js (obfuscated)
    • does reference app://resources/${o}.html, which is interesting because
      • another protocol?
      • what goes in there appears to be top-level windows that are created: the “about slack” box, a “basic auth view”, the main window (I think), a log viewing window, and a settings editor
      • there’s no HTML files in resources/. probably downloaded. or elsewhere?
      • since it’s app://resources/, the code is handling where to find it, so it could be saved anywhere
    • I also see app://resources/notifications.html
    • I see what looks like the electron window creation section, which seems to specify:
      • preload: the component-preload-entry-point.bundle.js script
      • URL: I think it’s the app://resources/${o}.html.toString()
        • o is inspected and used to set the windowOptions argument

No other concrete pointers to HTML files, so I think it’s time to check out the ones we have.

Based on the names, basic-auth-view is the best candidate for an opening view. It has an empty HTML <body> tag. Of note:

renderer.vendor.bundle.js

basic-auth-view.bundle.js

Might end my exploration now, because there’s more stuff (e.g., searching “magic link”) reveals a bunch more in component-preload-entry-point.bundle.js (and the main bundle, and the magic link bundle).

One other fruitful exploration might be the ipcRenderer (occurs in a bunch of places).

Overall, this was very educational. My takeaway re: loading is:

  1. slack definitely loads a huge amount of stuff from their servers
  2. the service workers update the app in the background as you run it (i.e., you never update the electron app itself) by downloading and caching new versions of all the assets

Interestingly, I still don’t have a super solid understanding of how we get

The single working main HTML page comes from a Service Worker on load. On force reload (“force cold boot” in their terms), it is retrieved from https://app.slack.com/client/XXX/YYY. The enormous mess of JS is server (edge) loaded, then Service Worker-loaded, too.

Is the electron stuff (main and preload) able to be updated with the Service Worker? Probably not, I would guess. I wonder if it can self-update, or if it actually downloads new app versions.

Remaining questions:

Investigating file:// usage in the ASAR-bundled code:

On the other hand, we do see (main.bundle.js)

Some interesting bits found around the protocol handling (in main.bundle.js):

It’s still unclear to me whether file:// is used at all, but it certainly seems app:// is used. I don’t recall so far whether a custom protocol has any benefit over file://, though it might. Minimally, wrapping allows intercepting all requests yourself, so I could see that alone as a justification.

If I had to guess, I would say that file:// is not used.

So I’d guess they register app:// early on, load the initial auth views using it, and then redirect to the full https:// view. I’m not positive that I need to do this rather than use file://, but it’s helpful to see.

Service Workers

(NB: chronologically done in mid section of Slack investigation above)

Checking out https://developer.chrome.com/docs/workbox/service-worker-overview

“Service workers are specialized JavaScript assets that act as proxies between web browsers and web servers. They aim to improve reliability by providing offline access, as well as boost page performance.”

The term “asset” freaks me out.

Visiting chrome://serviceworker-internals/, I have 545 of them. 3 are running on that internal page, all chrome extensions. The rest are stopped.³

“An indispensable aspect of service worker technology is the Cache interface, which is a caching mechanism wholly separate from the HTTP cache.”

and, from MDN:

“The Cache interface provides a persistent storage mechanism for Request / Response object pairs that are cached in long lived memory.”

https://developer.mozilla.org/en-US/docs/Web/API/Cache

So this is all checking out.

Service workers can intercept network requests. Since the Cache is for (Request, Response) pairs, this makes sense. It’s pretty interesting though.
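That interception boils down to a tiny pattern. Here’s the cache-first heart of a fetch handler, factored as a plain function so it runs outside a worker: cacheFirst is my factoring, and cacheMatch/network stand in for caches.match and fetch.

```javascript
// Cache-first: answer from the Cache if we have a stored (Request, Response)
// pair, otherwise fall through to the network.
async function cacheFirst(request, cacheMatch, network) {
  const cached = await cacheMatch(request);
  return cached !== undefined ? cached : network(request);
}

// In an actual sw.js this is the classic one-liner:
//   self.addEventListener("fetch", (e) =>
//     e.respondWith(caches.match(e.request).then((r) => r || fetch(e.request))));
```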

Some more notes:

Methods

|         | sync   | async    |
|---------|--------|----------|
| preload | send() | invoke() |
| main    | on()   | handle() |

Footnotes


  1. All the gpt-4s have gotten stupider these days, to my dismay. ↩︎

  2. After writing this out, I realize a new vite config for each experiment probably would have been fine… though I guess running them all would have been annoying, maybe? ↩︎

  3. Aside: it’s always a bit annoying that chrome extensions are identified as gibberish chrome-extension://schlabadadobabambado. Each one has its URL listed 4 times and zero mention of its name. ↩︎