Electron Breaks Brain
Trying to get Electron set up has made me realize I don't understand so many aspects of my build, deploy, and hosting setup.
Showing anything
I will just let my comments speak for themselves right now rather than try to re-type out the long story of my confusions and failures.
```js
// mainWindow.loadFile("dist/index.html"); // doesn't work (can't find rest)
// mainWindow.loadURL("http://localhost:5173/play"); // works, though requires dev server
// mainWindow.loadFile("index.html"); // doesn't work (uses top-level index.html, even if this file (main.js) is in dist/)
// mainWindow.loadURL(`file://${import.meta.dirname}/dist/index.html`); // doesn't fix it.
// mainWindow.loadFile(fileURLToPath(new URL("./dist/index.html", import.meta.url))); // doesn't fix it
// TODO: try vite base config https://vitejs.dev/config/shared-options.html#base
// or other options claude came up with
// maybe a separate build and copy all files into electron-dist/ or something?
// mainWindow.loadURL("/"); // for react router? nope
// mainWindow.loadURL("/play"); // for react router? nope
mainWindow.loadFile("index.html"); // showing error page
```
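For future reference, the Vite option from the TODO above would look something like this (a sketch from reading the docs; I haven't verified it fixes this case):

```javascript
// vite.config.js (sketch): base "./" makes the built index.html reference
// its assets with relative paths instead of absolute /assets/... paths,
// which is the usual prerequisite for loading a build over file://.
import { defineConfig } from "vite";

export default defineConfig({
  base: "./",
});
```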
Aside: it's so crazy to spend days and days and days on this when MY END GOAL, like my ideal goal here, is to replicate exactly what I already have working on my website in a 100 MB executable. Just so that it can go on Steam, because people look at Steam.
Serving URLs
After giving up (hopefully temporarily) on running "directly" with some kind of `yarn run electron .`, I tried using electron-builder, copying everything, and loading the app.
Lo and behold, I get the stylized app showing me a special new error I hadn't seen.
Opening the console to see the full error:

```
Error: No route matches URL "/Users/max/repos/ttmh/frontend/electron-dist/mac-arm64/ttmh-frontend.app/Contents/Resources/app/index.html"
```

So… something with the router. I think. That's react-router(-dom).
The internet said I must switch from `BrowserRouter` to `HashRouter`, which I don't want to do, because:

- I want to change as little as possible
- Isn't Electron supposed to jUsT wOrK
- I'm probably using it incorrectly in several places (I also know I definitely just straight up navigate to paths like `/this` sometimes, which is probably wrong)
- react-router's docs say:

> We strongly recommend you do not use HashRouter unless you absolutely have to.
I tried interrogating Claude01 on whether I really needed to do this. I mistrust both webdev StackOverflow and LLMs. It started telling me about Electron's special implementation of the `file://` protocol. And despite Electron's chrome(ium) conforming to the HTML(5?) history API (edit: or not?), because the app sends a request to the server for every page expecting to use http(s), the use of `file://` screws things up (edit: even when not sending requests, I think it still wouldn't work. But that was also happening).
Wait… does my app actually send a request to the server for every route change?
Because vite doesn't log anything when you run the dev server, I never actually knew when something made a request to the frontend server. And because Cloudflare Pages always just worked, I never cared.
I fired up a python server in my `dist/` directory:

```
python -m http.server 4000
```
Home page: beautiful, everything looks exactly perfect.
Then I click on a single URL and I get a total and utter 404 from, I assume, python's server:
I can't believe it. So the whole "local, client-side routing in react-router(-dom)" is actually doing server requests each time, reloading the same page, and then doing some kind of magic to display the correct one?
So wait, how does Cloudflare actually serve pages? Is it just sending `index.html` every time to every route? It can't be. How would it know to do that?
Yes, it's doing exactly that. Quoth the docs:
> Single-page application (SPA) rendering
>
> If your project does not include a top-level `404.html` file, Pages assumes that you are deploying a single-page application. This includes frameworks like React, Vue, and Angular. Pages' default single-page application behavior matches all incoming paths to the root (`/`), allowing you to capture URLs like `/about` or `/help` and respond to them from within your SPA.
Well I'll be.
So now, do I

- embed a server in my Electron app? (this seems wrong)
- make a custom protocol like Claude is telling me I could try (`app://`) and intercept stuff that way (scary?)
- switch to the strongly unrecommended `HashRouter`?
Well, one interesting note is that if I visit `index.html` on my local python server, I actually get

- the same "Not Found" error page
- with the console showing the same object (`Gp` — minified?)
- with the same structure

```
{
  "status": 404,
  "statusText": "Not Found",
  "internal": true,
  "data": "Error: No route matches URL '...'",
  "error": {}
}
```

- with the same error message format

```
Error: No route matches URL "/index.html"
```

So hitting `index.html` should never work in the first place. This might not change my problem at all, but somehow it's comforting to hit the same react-router(-dom) error in electron as with my python server.
I think what's confusing me is the very basic first page in React Router's docs says:

> Client side routing allows your app to update the URL from a link click without making another request for another document from the server. Instead, your app can immediately render some new UI and make data requests with fetch to update the page with new information.
So I must be terribly screwing this up.
OK but regardless, I can only even consider doing it properly if I get something displaying on the page. (Though if I can get something displaying on the page… maybe who cares about doing it properly.)
OK yeah of course I can't resist checking this
```jsx
// current
<a href={nl.url} className="hover:underline">
  {nl.display}
</a>

// how i'm supposed to be doing it
<Link to={nl.url}>{nl.display}</Link>
```
wow, this is… what. So both links, if I hover over them, show this in the bottom of the web page.
Using my python server because it (appropriately) refuses to serve anything but `/`:

- clicking on the `<a>` produces the not found page
- clicking on the `<Link>` loads the page. And instantly.
So I guess I've been missing out on all the react-router magic speed this entire time.
Now, I'm still tempted to have different pages because I think it's good for SEO or something. But wow, holy cow.
I'm a bit nervous because I think some logic in my game breaks if I don't get to hard refresh the page sometimes (e.g., nav away from game, nav back, state is weird). So I'm not sure I can entirely live without hard refreshes.
My question now is: how were the `<a>` links working at all? React router must somehow be intercepting the same old `index.html` reply my (ahem, Cloudflare's) server has been sending back the whole time, checking the browser address bar, and then swapping in the correct page without me even realizing I was doing it wrong.
OK, on second inspection, the local `Link` is not quite instant. Though, vs requesting from a server, it's probably still faster, right? But the other thing I noticed is there's some flashing happening. I recorded a video of switching between them and it's just the empty page (w/ the circuit background) flashing between them; i.e., the absence of the content box. (Could I change this?)
I'm too curious: what is that `<Link>` actually doing? Can I inspect it?

```html
<a class="hover:underline" href="/changelog">Changelog</a>
```

You cannot be serious. It's just an `<a>`???? There's no JavaScript??? What is happening?!?!?!?!?!?
I think I need to go lie down. And then read the React Router docs again. Or not, and just keep googling how on earth it's so hard for me to get my attempt at the most boring, typical setup to work.
Command-clicking a link to open in a new tab does hard-load the URL (as evinced by my python server still breaking in that case).
https://reactrouter.com/en/main/start/concepts#link tries to explain this, but I don't know how they intercept the browser's default behavior only in the exact cases they want.
Update from future Max: I think it probably attached a normal old `onClick` handler and did an `event.preventDefault`, it's just that `onClick` isn't rendered as an HTML attribute so I didn't see it. So it's probably not that crazy. Good reminder to come back to stuff the next day when it's past 7pm.
All right, checking this out next: https://github.com/daltonmenezes/electron-router-dom
I think it just uses a Hash Router. All the extra code is to handle multiple windows.
```tsx
export function Router(routes: RouterProps) {
  ...
  return (
    <HashRouter basename={windowID}>
      <Routes>{transformedRoutes[windowID]}</Routes>
    </HashRouter>
  )
}
```
The electron-vite project agrees with this too:

> Electron does not handle (browser) history and works with the synchronized URL. So only hash router can work. … For `react-router-dom`, you should use `HashRouter` instead of `BrowserRouter`.
I harassed Claude, Googled, and read Electron's docs for a while longer to try to figure out exactly what aspect of Electron makes it not support the normal browser history, but I couldn't understand it. So I'm going to just give up and try out the Hash Router. But first I need to try to have multiple configs with Vite.
Multiple Vite Configs
It is nice that playing with related tech in your free time can help your main project.
The other day I was trying to get a new repo set up using Vite instead of Webpack because using Webpack makes me want to set my desk on fire. But it was one of my "experiment" repos where I want to have a single build config but many sets of input–output. In other words, each experiment is fully separate—HTML, source TypeScript, the works—so that I don't have to worry about refactoring as I learn new things and change everything. (I guess maybe they can share generated Tailwind CSS, that's fine.) But I don't want to set up a new build config or new repository each time.02
The solution to this was sitting right in front of me the whole time ("multi-page app"), but I spent enough time confused and reading about Vite and trying things that I feel a tiny bit more confident with, e.g., using multiple configs.
So I'm going to try splitting `vite.config.ts` into multiple options:

- `vite.web.config.ts`
- `vite.electron.config.ts`

… and then passing the config on the command line.
Spent some time trying to figure out how to send different environment variables for web/electron. I think if I don't want to prefix them `LIKE_THIS=here ...` in the build commands then it's better to just have whole separate file sets. A bit ugly copy-pasting dev/prod configs like this, but probably better than these critical variables living in strings in the package.json scripts.
Then spent some time trying to name things. Specifically the axis of web vs electron. Many other terms were taken. I came up with "deployment":

| name | owner | examples |
|---|---|---|
| mode | vite | development, production |
| target | vite | modules, es2020, chrome87, firefox78 |
| deployment | me | web, electron |
| platform | electron-builder | win, mac, linux |
Refs:
- Vite: modes: https://vitejs.dev/guide/env-and-mode.html#modes
- Vite: build.target: https://vitejs.dev/config/build-options#build-target
- electron-builder: platform: https://www.electron.build/configuration/configuration#overridable-per-platform-options
Have started a `deployment.tsx` file that contains the implementation differences between deployments. Starting with just the `HashRouter`. We'll see if this one file is enough, or whether something like code splitting will necessitate more over-engineering.
And… it works.
Wow, so switching to the `HashRouter` just super worked. Can't login yet because that's going to be a whole thing, but pages using `<Link>` can go to each other. All the vanilla webpages render.
Main glaring thing to fix is raw resource URLs, specifically images that start with `/img/...`. A bunch of these are sent from the server, too, for the game. Going to hold off on fixing the home page ones because the electron app might not have the home page at all. Playing the game is going to be the next main thing.
But just before that, I'd like to try out a super simple preload context bridge, inspired by the `vite-electron-builder` repo. Exporting the node process versions, and as a stretch goal, trying IPC.
Actually, scratch that, the `vite-electron-builder` has disabled the renderer sandbox. I'm still fuzzy on the preload vs renderer context, but a comment in their code says "Sandbox disabled because the demo of preload script depend on the Node.js api." I have no reason to do this, so I'll try some simple IPC instead.
Just kidding lol
Let's try out IPC
https://www.electronjs.org/docs/latest/tutorial/ipc
What's funny is the tutorial has one way of setting up IPC in the main process
```js
function createWindow() {
  // ...
  ipcMain.on("set-title", (event, title) => {
    const webContents = event.sender;
    const win = BrowserWindow.fromWebContents(webContents);
    win.setTitle(title);
  });
}
```
… but then when they go to explain it, they immediately do it differently:
```js
function handleSetTitle(event, title) {
  const webContents = event.sender;
  const win = BrowserWindow.fromWebContents(webContents);
  win.setTitle(title);
}

app.whenReady().then(() => {
  ipcMain.on("set-title", handleSetTitle);
  // ...
});
```
So whatever, hoping that doesn't matter.
I was having trouble where the IPC thing wasn't being exposed in the renderer (`window.electronAPI` was just undefined). It occurred to me that I have no idea how to debug stuff in the main (or preload?) code. The prospects in the guide look grim:
https://www.electronjs.org/docs/latest/tutorial/debugging-main-process
I guess you have to expose a port for the "V8 inspector protocol" and then attach something to it that understands it?
Thankfully just specifying the directory of the preload script seemed to fix it:

```js
{
  // ...
  preload: join(import.meta.dirname, "electron-preload.js"),
}
```

… because at least now I'm getting an error. But this error is what's baffling me, and why I turned back to this devlog.
```
Unable to load preload script: /Users/max/repos/ttmh/frontend/bin/mac-arm64/ttmh-frontend.app/Contents/Resources/app/electron-preload.mjs
SyntaxError: Cannot use import statement outside a module
    at runPreloadScript (VM4 sandbox_bundle:2:83494)
    at VM4 sandbox_bundle:2:83813
    at VM4 sandbox_bundle:2:83968
    at ___electron_webpack_init__ (VM4 sandbox_bundle:2:83972)
    at VM4 sandbox_bundle:2:84095
```

The key here is `Cannot use import statement outside a module`.
But I don't understand why it doesn't think it's a module.

- I have `"type": "module"` in my package.json
- I even tried renaming the preload script to `electron-preload.mjs`
- My main script is a module, which seems to be fine
- The `vite-electron-builder` repo builds its preload script into a module that uses modern ES imports
- The closest thing I've found so far online is this GH issue comment, which describes all the things you can't ordinarily do in your preload file. But I'm only importing exactly what's required: `import { contextBridge, ipcRenderer } from "electron";`
- Nevertheless, I did try turning on node integration, thinking maybe it needed that to allow modern (module) `import` syntax. That didn't fix it.
Do I make this one file a `require()` style??
There's a terrifying long (unresolved) thread in electron forge that doesn't not make me think that may be required: https://github.com/electron/forge/issues/2931
OMG that was it. I guess preload can't be an ES6 module in this context? The `vite-electron-builder` template does change lots of default settings, like `sandbox: false`, which may have been allowing `import` to work there.
Updates from working with Greg

- electron's IPC requires both calling and handling with different function names for sync vs async
- able to shim in reading from `localStorage` to get the JWT into the login page to send to server and try to get back session cookie
- (re-remembering) chrome's yellow colored failed cookie means that the server sent it back, but the browser refused to save it due to the cookie configuration
  - this may be a mess w/ the client being local and on `file://`, whereas the server's cookie is expecting a bunch of stuff to be first party, specific domain, and secure
  - to refactor:
    - could refactor all privileged `fetch` calls to pass as bearer auth,
    - … or change this only on the electron side, and have server accept both
  - i'm not sure the choice really matters. sure with a hidden cookie JS can't get it, but the user can always find it anyway, so i'm really only defending from malicious code
Electron cookies: the good
cookies, in general, are cool, because the following works:

- (Chrome) If you manually set the (first party, session) cookie, are you suddenly logged in?
  - yes
- (Electron) If I save a cookie, can I see it in dev tools?
  - yes
- (Electron) Will the (electron) browser then use it for requests?
  - yes
If I set the following cookie (in Electron's main process) and hit the production website, it all works and the game can be played (first time ever!):
```ts
const cookie: CookiesSetDetails = {
  url: "https://talktomehuman.com",
  name: "session_jwt",
  value: "(REDACTED DUH)",
  domain: "talktomehuman.com",
  path: "/",
  secure: true,
  httpOnly: true,
  // expirationDate: // TODO: maybe set (unix seconds), maybe OK (expires w/ session)
  sameSite: "lax",
};
await session.defaultSession.cookies.set(cookie);
```
However…
Electron cookies: the bad
Summary:

- I don't think the domain (let alone protocol) can mismatch
- I don't think cookies work on `file://` at all

… the above has stringent requirements:

- the `domain` seems to have to match the URL exactly (or vice versa). Because using `api.talktomehuman.com` as the domain results in the following error:

```
Error: Failed to set cookie - The cookie was set with an invalid Domain attribute.
```

- the above error is also triggered by trying to set the `url` field to anything remotely useful for a local context, like `file://`
Some additional info found in a few places:
https://github.com/electron/electron/issues/4422#issuecomment-182618974
Seems to indicate URL/domain cookie matching guidelines (? rules?) which… we'd totally violate w/ `file://` and on a different domain.
https://github.com/electron/electron/issues/27981

- cookies aren't supported on custom protocols
- Chromium does have a flag that would add `file:` to the "cookieable scheme list," though it's unclear to me whether that flag is modifiable from Electron
  - looks like probably not: "we don't set things up such that file is allowed"
- another comment from that issue where someone needed cookies to use a third-party SDK: "The only other option I see is to run a local server, even in production, and serve up the file via http, which would be less than ideal."
- another comment had a couple interesting bits:
  - tweaking CORS (I think on the backend) to prevent it from blocking requests using `file://`
  - intercepting the HTTP protocol—rather than registering a new file protocol—in order to get cookies to work. (this is kind of mind bending and i don't fully understand how this works)
https://github.com/electron/electron/issues/22345#issuecomment-1057130188
Some code (and refined by the next two comments) that intercepts web requests, checks the headers, and modifies cookie parameters on the fly. Probably not necessary since I control the backend, too, but good to know such hackery is (or at least was, in 2020) possible.
https://stackoverflow.com/questions/50062959/electron-set-cookie
Electron: setting a cookie with `file://`
- The first answer, by the author in 2018, registers a new scheme (`app://`) to overcome the `file://` limitation. But a comment from 2019 says this no longer works. The fact that it doesn't work checks out with what I learned above (GH issue from 2021 that cookies aren't supported on custom protocols). (see third answer)
- The second answer is useless
- The third answer was from the 2019 commenter, but they probably just changed some configs and ended up using a custom protocol (`app://`) that presumably had cookies working for them. (I'm not sure exactly when it stopped working.)
- The fourth answer (2023) is probably the most helpful. They outline two scenarios:
  - Hitting a remote URL (i.e., using the standard `http(s)://` scheme), you can set Cookies using Electron's API—at least, for that particular domain
    - This is true; I verified it in the section above
  - Hitting a local URL (i.e., using `file://`) and then trying to make a `fetch` call to a server, they had trouble trying to "set or retrieve cookies."
    - However, from what I understand of Chrome / web browsers, it makes sense that you wouldn't be able to simply set cookies from HTTP headers in the normal flow, because (a) we're on `file://`, (b) the origin is different
    - So for me, the interesting question is whether we could set them in a privileged way (e.g., in the main process) and then have the browser simply use them—send them with requests. (Guessing no.)
    - Their solution was to reduce security settings (enable node integration; disable context isolation), then use Node's HTTP module to manually make `fetch` calls, constructing the HTTP headers w/ cookies by hand.
      - At this point, I might as well just use bearer auth.
- https://developer.mozilla.org/en-US/docs/Web/WebDriver/Errors/InvalidCookieDomain

This is Mozilla's documentation on WebDriver, so it's not necessarily the same in Chromium or Electron. However, some concepts seem relevant:

- it mentions documents being "cookie-averse" if they're not served over `http://`, `https://`, or `ftp://`
  - they specifically mention `file://` being cookie-averse
  - Chromium's list, IIRC, was: `http://`, `https://`, `ws://`, `wss://`
- it also mentions problems with mismatched domains. For example:
  - `example.com` — current domain
  - `example.org` — trying to add a cookie for this domain fails
high level Q to electron discord
Ahh, discord and/or github discussions, where your problems go to die silently.

> hey all, i have a very high-level question. do major electron apps like slack, discord, etc.:
>
> 1. package their HTML files locally, and run over `file://`?
> 2. package their HTML files locally, and run a local webserver (`http://localhost:...`)?
> 3. package their HTML files locally, and use a custom protocol (e.g., `app://`)?
> 4. just open a browser window to their remote websites and run over `https://`?
> 5. Something else?
>
> For context, I've been working on method 1: trying to get a relatively simple app to run over `file://`. But I seem to constantly run into issues and rough edges, like routing and cookies. I'm wondering whether anyone knows how the major apps run?
next steps
two main paths forward from where I stand:

1. send auth headers directly w/ bearer auth in `fetch()` calls
   - would the lack of cookies have issues w/ redirects and/or the loader logic in the router? i.e., with the -> `/login` redirects?
   - maybe not, because we're always able to GET index.html, the only webpage on the frontend. I think all interesting calls must be manually `fetch()`d w/ cookie auth to backend
   - issues in `public/` still remain? how hard to address?
2. abandon a local build entirely and use a local web server or browser window to the remote site
   - potentially include a local landing page for the auth flow?

I'm still tempted to try 1, probably because I'm a fool. It might be worth sketching out what 2 would look like in more detail.
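Path 1 would mostly amount to a tiny wrapper like this (hypothetical helper name; assumes the JWT is already readable, e.g. shimmed out of localStorage as above):

```javascript
// Attach the JWT as a bearer token on privileged fetch() calls instead
// of relying on a session cookie, which file:// can't store.
function withBearerAuth(jwt, options = {}) {
  return {
    ...options,
    headers: { ...(options.headers || {}), Authorization: "Bearer " + jwt },
  };
}

// usage sketch: fetch(url, withBearerAuth(jwt, { method: "POST" }))
```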
Investigating Slack
Slack is a legit Electron app. (One of the first?) Do they package all assets locally and run on `file://`, or another protocol? Or are they just serving their website from an Electron shell?

```
SLACK_DEVELOPER_MENU=true open -a /Applications/Slack.app
```

some assets exist in bundle locally, but appears to load page and tons of assets from URL (the only document is from `https://app.slack.com/client/XXXXXXXXX/YYYYYYYYYY`)
however, if you open without internet, it still works! even reloading is fine. however, force reloading displays a blank screen. but then getting back online doesn't fix it without a full reboot.
Note from investigating Service Workers later:

> If you force-reload the page (shift-reload) it bypasses the service worker entirely. It'll be uncontrolled. This feature is in the spec, so it works in other service-worker-supporting browsers.

checking "Disable cache" and reloading keeps the layout. However the network tab indicates the document involves (was served by?) a "Service Worker" (web thing I've heard of but don't know anything about). And if you look into the details, it says

```
Source of response: ServiceWorker cache storage
```

… so it might be that the doc is retrieved dynamically by a service worker which has its own caching logic that circumvents the "Disable cache" check.
Other notes:

- a huge amount of Fetch/XHR are from "gantry" — all "pixel" and things named `track/`. kind of depressing.
- 1 doc, 3 CSS files, ~12 JS, 2 fonts (lato in 4 styles + slack icons), ~20 images, ~12 mp3s, 0 manifests / WS / Wasm, and ~50 JSON files ("Other")
- JS all prefixed by "gantry" or have it in the query string—maybe for ultimate tracking your tracker proxies all your assets?
- Turns out Gantry is their base library for all frontend things. So it's handling many aspects of the code: bundling / splitting (probably managing webpack), caching (via Service Workers), fetching (also Service Workers relevant), and common internal tooling (tracking, profiling).

Looks like they may have written about this: https://slack.engineering/service-workers-at-slack-our-quest-for-faster-boot-times-and-offline-support/
> When you first boot the new version of Slack we fetch a full set of assets (HTML, JavaScript, CSS, fonts, and sounds) and place them in the Service Worker's cache. We also take a copy of your in-memory Redux store and push it to IndexedDB. When you next boot we detect the existence of these caches; if they're present we'll use them to boot the app. If you're online we'll fetch fresh data post-boot. If not, you're still left with a usable client.

So it's literally a chrome shell w/ a service worker script (if that! honestly it is probably loaded too).
Also interesting:

> Note that most binary assets (images, PDFs, videos, etc.) are handled by the browser's cache (and controlled by normal cache headers). They don't need explicit handling by the Service Worker to load offline.

I still wonder why so many assets were bundled.
Service Workers are reinstalled every time their script changes. They update their service worker every time any asset changes as well:

> Every time we update a relevant JavaScript, CSS, or HTML file it runs through a custom webpack plugin that produces a manifest of those files with unique hashes (here's a truncated example). This gets embedded into the Service Worker, triggering an update on the next boot even though the implementation itself hasn't changed.

It's kind of interesting. It feels wrong. Like ultra heavy-handed cache invalidation.
More notes:

- They key everything by (build) timestamp and only request from one timestamp to ensure compatibility. (Can't tell whether this is trivial or interesting.)
- Their service worker then proxies literally all `fetch` requests, serving from the Cache if it exists or passing through.
On caching API responses:

> A common workflow at Slack is to release new features alongside corresponding changes in our APIs. Before Service Workers were introduced, we had a guarantee the two would be in sync, but with our one-version-behind cache assets our client was now more likely to be out of sync with the backend. To combat this, we cache not only assets but some API responses too.
>
> The power of Service Workers handling every network request made the solution simple. With each Service Worker update we also make API requests and cache the responses in the same bucket as the assets. This ties our features and experiments to the right assets — potentially out-of-date but sure to be in sync.

(emphasis theirs)
I don't fully understand. Need example.
One more interesting thing: what's embedded at the top of the page (`<script>` and `<style>` in `<head>`). These are presumably meta observability stuff (profiling and tracking), bootstrapping, offline detection, and barebones skeleton setup.

- script: primarily `haveAssetsLoaded()` and `reloadIfIssue()`
  - utilities for
    - timing loads and logging
    - sending a beacon to servers if trouble loading
    - showing trouble loading overlay over whole window (document)
  - the "trouble loading" detector is nice:
    - show the trouble loading overlay
    - set "online" listener that simply reloads the page
  - `haveAssetsLoaded()`: checking whether assets (CSS / JS) all loaded
  - `reloadIfIssue()`: checks assets, hooks up online listener, reload retry logic, CDN fallbacks, and clearing cache / service worker
- style for trouble loading overlay
- script: hooking up profiling for long tasks
- script: dark mode w/ hardcoded CSS (probably to prevent a bright flash)
- script: setup a global error handler that records all errors and sends them to slack
- script: grab the CDN location
- script: preload fonts by creating links to them and setting preload + prefetch
- script: mysteriously set `jLen = 4` and `cLen = 3` (IIRC these are lists of JS and CSS assets)
- (+10, 11) script: w/ profile, force load + inject 3 (`cLen`?) stylesheets ("slack kit", "boot", "helper" styles) into head
Scripts in body:

- translate the trouble loading error message
- download localization JSON files
- webpack shims and sets up enormous asset lists (json, css, img, animations, and probably all the JS too)
- (+ onwards) load manually specified JS files, I think these are the ones referred to by `jLen`

I'm not positive exactly where all the loading is kicked off, but I didn't search for it, and there's a lot of minified code.
I'm not even sure whether they have an initial `index.html` inside. `app.asar` is 12 KB. It contains:

- a tiny index.js (which is the main script)
  - this just points you to the `asar` file for your architecture (`app-x64.asar` or `app-arm64.asar`)
  - I was thinking there ought to be more but this is probably the Mac-specific build
- the package.json, which is cool because it contains
  - all their dependencies and dev dependencies
  - all their build scripts
OK cool, let's check out my `app-arm64.asar`. It has

- `dist/` w/ code bundles, mp3 files, fonts, localization, icons
- `node_modules` (w/ some 1-liner gems, like `number-is-nan` and `isarray`)
- a nearly-identical `package.json`, but no `index.js`. The main is: `./dist/boot.bundle.js`
The code bundles in `dist/`:

```
497.bundle.js
basic-auth-view.bundle.js
boot.bundle.js
child-preload-entry-point.bundle.js
component-preload-entry-point.bundle.js
magic-login-preload-entry-point.bundle.js
main.bundle.js
net-log-window.bundle.js
preload.bundle.js
renderer.vendor.bundle.js
settings-editor.bundle.js
```

And there are 3 HTML files:

```
basic-auth-view.html
net-log-window.html
settings-editor.html
```
Tracing code path:

- `boot.bundle.js` (obfuscated)
  - seems like utils
  - one interesting thing, I do see `slack://`; so that's one potential protocol
  - references `./main.bundle.js`
- `main.bundle.js` (obfuscated)
  - does reference `app://resources/${o}.html`, which is interesting because
    - another protocol?
    - what goes in there appears to be top-level windows that are created: the "about slack" box, a "basic auth view", the main window (I think), a log viewing window, and a settings editor
    - there's no HTML files in `resources/`. probably downloaded. or elsewhere?
    - since it's `app://resources/`, the code is handling where to find it, so it could be saved anywhere
  - I also see `app://resources/notifications.html`
  - I see what looks like the electron window creation section, which seems to specify:
    - preload: the `component-preload-entry-point.bundle.js` script
    - URL: I think it's the `app://resources/${o}.html` .toString()
    - `o` is inspected and used to set the `windowOptions` argument
No other concrete pointers to HTML files, so I think it's time to check out the ones we have.
Based on the names, `basic-auth-view` is the best candidate for an opening view. It has an empty HTML `<body>` tag. Of note:

- a very lengthy `<meta>` w/ a `Content-Security-Policy` including a bunch of slack domains
- loads `renderer.vendor.bundle.js` (why "vendor"?)
- loads `basic-auth-view.bundle.js`
`renderer.vendor.bundle.js`:

- very short
- seems to set up a handful of react functions
- bunch more utils that look very react-y (`componentWillMount`)
- ok, this might be either React or a barebones subset. "vendor" would make sense
`basic-auth-view.bundle.js`:

- sets up Sentry
- sets up font styles
- sets up CSS
- buncha little utils (I think from Sentry)
- react v16
- redux
- ~~one~~ ~~couple~~ ~~few~~ buncha things copyrighted by microsoft
- yo, why isn't all this "vendor" lol
- nice big sentry error reporting message about running in the wrong Electron process & possible causes and solutions (not sure whether Slack or Sentry wrote this)
- only at the bottom, some code about setting a username / password for a proxy or server
I might end my exploration here, because there's more stuff still: e.g., searching "magic link" reveals a bunch more in `component-preload-entry-point.bundle.js` (and the main bundle, and the magic link bundle).

One other fruitful exploration might be the `ipcRenderer` (it occurs in a bunch of places).
Overall, this was very educational. My takeaways re: loading are:

- slack definitely loads a huge amount of stuff from their servers
- the service workers update the app in the background as you run it (i.e., you never update the electron app itself) by downloading and caching new versions of all the assets
Interestingly, I still don't have a super solid understanding of how we get:

- from what's downloaded in electron (all the JS bundles above)
- to the working single main HTML page

The working single main HTML page is gotten from a Service Worker on load. On force reload ("force cold boot" in their terms), it is retrieved from `https://app.slack.com/client/XXX/YYY`. The enormous mess of JS is server (edge) loaded, then Service Worker-loaded, too.

Is the electron stuff (main and preload) able to be updated with the Service Worker? Probably not, I would guess. I wonder if it can self-update, or if it actually downloads new app versions.
Remaining questions:

1. Are all the downloaded JS bundles from the ASAR actually used during normal app running? Or are they deprecated as soon as we have a cache downloaded from the server?
   - my thinking right now (barring understanding any electron updates) is:
     - the renderer's assets (HTML, CSS, JS) are Service Worker-managed, heavily cached, etc., and never sent to client
     - but the main / preload (electron-specific) assets are not
2. Does the initial load hit `file://`? (And are we using `app://`, `slack://`, `https://` for other stuff?)
   - the page assets all seem to come from `https://` directly
Investigating `file://` usage in the ASAR-bundled code:

- `component-preload-entry-point.bundle.js`, `main.bundle.js`, and `preload.bundle.js` all have this gem (this may be in Sentry's code, though):

  ```js
  .replace(new RegExp(`(file://)?/*${r}/*`,"ig"),"app:///")
  ```

- the other instances I see are error reporting and polyfills
- it's hard to see it directly loading anything
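Out of curiosity, here's roughly what that replace accomplishes. Assuming `r` holds the app's on-disk resource path (my guess; the actual value of `r` in the minified bundle is unknown), it normalizes `file://` URLs under that path into `app:///` URLs:

```javascript
// Hypothetical value of `r`; in the real bundle it's presumably the
// app's resource directory.
const r = "/Users/max/app";

// Strip the optional file:// prefix, the resource path, and surrounding
// slashes, leaving an app:/// URL relative to the resources root.
const rewritten = "file:///Users/max/app/index.html"
  .replace(new RegExp(`(file://)?/*${r}/*`, "ig"), "app:///");

console.log(rewritten); // → "app:///index.html"
```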
On the other hand, we do see (in `main.bundle.js`):

- `protocol.registerSchemesAsPrivileged()` for
  - "sentry-ipc"
  - I'm pretty sure "app"
- `protocol.handle("app", ...)`
Some interesting bits found around the protocol handling (in `main.bundle.js`):

- if the protocol isn't "app", error: invalid protocol
- (if url.includes("..")) "Traversing paths upwards not allowed in app:// protocol."
- (if host !== "resources") "Host must be resources for app:// protocol."
- (if (!w.pathname || w.pathname === "/")) "Missing path in app:// protocol."
- if pathExists(…), fetch, or error: "Requested file not found."
- finally, "unexpected error in app:// protocol handler." ("app://" hardcoded in string)
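Piecing those checks together: here's my reconstruction of what such a handler could look like, not Slack's actual code. `protocol.handle` is a real Electron API (25+), but `validateAppUrl` and `RESOURCES_DIR` are names I made up, with the validation factored into a plain function:

```javascript
// Reconstruction of the checks above as a plain function (hypothetical name).
function validateAppUrl(rawUrl) {
  const url = new URL(rawUrl);
  if (url.protocol !== "app:") return { error: "Invalid protocol." };
  if (rawUrl.includes("..")) {
    return { error: "Traversing paths upwards not allowed in app:// protocol." };
  }
  if (url.host !== "resources") {
    return { error: "Host must be resources for app:// protocol." };
  }
  if (!url.pathname || url.pathname === "/") {
    return { error: "Missing path in app:// protocol." };
  }
  return { pathname: url.pathname };
}

// Wiring it into Electron would look something like (sketch only):
//
//   const { protocol, net } = require("electron");
//   const path = require("node:path");
//   protocol.handle("app", (request) => {
//     const result = validateAppUrl(request.url);
//     if (result.error) return new Response(result.error, { status: 400 });
//     const file = path.join(RESOURCES_DIR, result.pathname); // RESOURCES_DIR: hypothetical
//     return net.fetch(`file://${file}`); // on miss: "Requested file not found."
//   });
```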
It's still unclear to me whether `file://` is used at all, but it certainly seems `app://` is used. I don't recall so far whether a custom protocol has any benefit over `file://`, though it might. Minimally, wrapping allows intercepting all requests yourself, so I could see that alone as a justification.

If I had to guess, I would say that `file://` is not used.
- The only code section that checks for it might be from a third party; it has a github link and code that checks for FF / Safari nearby.
- The rest of the code seems to try to regex and/or substring it out.
So I'd guess they register `app://` early on, load the initial auth views using it, and then redirect to the full `https://` view. I'm not positive that I need to do this rather than use `file://`, but it's helpful to see.
Service Workers

(NB: chronologically done in the mid section of the Slack investigation above)

Checking out https://developer.chrome.com/docs/workbox/service-worker-overview

"Service workers are specialized JavaScript assets that act as proxies between web browsers and web servers. They aim to improve reliability by providing offline access, as well as boost page performance."

The term "asset" freaks me out.

Visiting chrome://serviceworker-internals/, I have 545 of them. 3 are running on that internal page, all chrome extensions. The rest are stopped.[3]

"An indispensable aspect of service worker technology is the Cache interface, which is a caching mechanism wholly separate from the HTTP cache."

and, from MDN:

"The Cache interface provides a persistent storage mechanism for Request / Response object pairs that are cached in long lived memory."

So this is all checking out.
Service workers can intercept network requests. Since the Cache is for (Request, Response) pairs, this makes sense. It's pretty interesting, though.
Some more notes:

- scoping is based on subdirs; as such, usually install in the root of the domain
- progressive enhancement is the idea; the page should work without it, but (future visits) can be sped up with it
- the browser automatically does a byte-by-byte comparison and updates to the new version if the script changes (I think). don't change the script name on update.
- service workers have a detailed lifecycle w/ installing, activating, waiting, running. it seems to take multiple page views for it to run (can accelerate on refreshes w/ dev tools)
- run on own thread, like web workers
- two kinds of caching: pre-caching (grab what you might want) and runtime caching (intercept network requests)
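The two caching modes above can be sketched in a few lines of service worker code. This is a generic minimal example, not anything from Slack; the cache name and asset list are invented, and the guard just keeps the snippet inert outside a real service worker context:

```javascript
const CACHE = "app-v1"; // bump the version to invalidate; keep the FILE name stable
const PRECACHE = ["/", "/index.html", "/app.css"]; // hypothetical assets

// `self` only exists in a worker context; this guard keeps the sketch inert elsewhere.
if (typeof self !== "undefined" && typeof self.addEventListener === "function") {
  // Pre-caching: grab what we might want, at install time.
  self.addEventListener("install", (event) => {
    event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
  });

  // Runtime caching: intercept every fetch; serve from the Cache, else hit the network.
  self.addEventListener("fetch", (event) => {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```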
Methods

|         | sync     | async      |
| ------- | -------- | ---------- |
| preload | `send()` | `invoke()` |
| main    | `on()`   | `handle()` |
Footnotes

1. All the gpt-4s have gotten stupider these days, to my dismay. ↩︎
2. After writing this out, I realize a new vite config for each experiment probably would have been fine… though I guess running them all would have been annoying, maybe? ↩︎
3. Aside, it's always a bit annoying that chrome extensions are identified as gibberish `chrome-extension://schlabadadobabambado`. Each one has their URL listed 4 times and zero mention of its name. ↩︎