Entering the second week of Recurse. Besides orientation and a few adventures in pair programming (special shout out to Jane Adams for trying out Irydium with me!), I spent most of my time attempting to get document saving & loading working with Irydium.
I learned from Iodide that not having a good document sharing story really inhibits collaboration and sharing, which is something I explicitly want to do here at the Recurse Centre (and in general for this project). That said, this isn’t actually an area I want to spend a lot of time on right now: it’s the shape of problem I’ve solved many times before (and that has been solved by many others). I’d rather spend my time over the next few weeks on things I haven’t had much of a chance to look at or pursue in my day-to-day.
So, to keep the complexity down, I decided to take the same approach as the Svelte REPL, which aims only to allow the reproduction of simple examples: it lets you save anything you type and browse anything you’ve previously saved. That’s not going to replace GitHub, but it’s more than enough to get started.
Supabase
So with that goal in mind, how do you go about it? If I wanted to completely fall back on my previous knowledge, I could have gone for the tried and true approach of Django / Heroku to add a persistence layer (what I did for Iodide). That would have had the benefit of being familiar, but would also have increased the overall implementation complexity of Irydium considerably. In the past year, I’ve become convinced that serverless approaches to building web applications are the wave of the future, at least for applications like this one. They’re easier to set up, easier to develop, and (generally speaking) cheaper to deploy. Just before I launched, I set up irydium.dev as a static site on Netlify and it’s been a great experience: deploys are super fast and it’s easy to reason about what’s going on “under the hood” (since there’s not much of a hood to look under).
With that in mind, I decided to take a (small) gamble and give Supabase a try for this one, after determining it would be compatible with the approach I wanted to take. Supabase bills itself as a “Firebase Alternative” (Firebase is another popular solution for bootstrapping simple web applications with persistence). In contrast to Firebase, Supabase uses standard database technology (Postgres!) and has a nice JavaScript SDK and a bunch of well-written tutorials (including one especially for Svelte).
The naive model for integrating with Supabase is pretty simple (sketched in code after the list below):
Set up a Supabase application, which provides you with a unique API endpoint to make web requests (this endpoint can be exposed publicly).
Have your client authenticate with an OAuth provider (e.g. GitHub, GitLab), then store an authentication token in localStorage.
You can then make requests to the above endpoint with the authentication token, which lets Supabase use row-level security to restrict modifications to the database: in this case, we can restrict users to updating their own documents.
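A minimal sketch of that flow, assuming a hypothetical “documents” table and the supabase-js API as it existed around this time (the exact calls have shifted between SDK versions, so treat this as illustrative rather than definitive):

import { createClient } from '@supabase/supabase-js';

// Both of these values are safe to expose publicly: row-level security
// on the server is what actually restricts access.
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key');

// Kicks off an OAuth redirect; when the user lands back on the site,
// supabase-js restores the session (storing the token in localStorage).
async function signIn() {
  await supabase.auth.signIn({ provider: 'github' });
}

// Once a session exists, requests carry the user's token, so a
// row-level security policy can restrict writes to rows they own.
async function saveDocument(title, content) {
  const { data, error } = await supabase
    .from('documents')
    .insert([{ title, content }]);
  if (error) throw error;
  return data;
}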
I’d say it probably took me 20–30 hours to get the feature working end-to-end (including documentation), which wasn’t too bad. My impressions were pretty positive: the aforementioned tutorial is pretty decent, and the supabase-js library provides a nice ORM-like abstraction over SQL that integrates well with Svelte. In general, working with Supabase felt pretty familiar to me from previous experiences writing database-backed applications, which I take as a very good sign.
The part that felt the weirdest was writing raw SQL to set up the “documents” table that Irydium uses: SQL is something I’m fairly used to writing because of my experiences at Mozilla, but I imagine this might be off-putting to someone newer to writing these types of things. Also, I have some concerns of how maintainable a Supabase database is over the long term: while it was easy enough to document the currently-simple setup instructions in the README, I do somewhat fear the prospect of managing my database via their SQL console. Something like Django’s schema migrations and management commands would be a welcome addition to Supabase’s SDK.
Netlify functions
The above approach isn’t what most people would consider to be “best practice”1. In particular, storing credentials in localStorage is probably not the best idea for an application presenting interactive content like Irydium: it wouldn’t be particularly difficult for a malicious document to steal someone’s secret and send it somewhere it shouldn’t be.
I’m not so worried about it at this stage of the project, but one intriguing possibility here (that’s compatible with our current deploy setup) would be to write some simple Netlify Functions to do the actual interaction with Supabase, while delegating to Netlify for the authentication itself (using Netlify Identity).
I experimented with writing a simple function to prove out this approach and it seems to work quite well (source, example). This particular function is making an anonymous query to the database, but I see no obstacle to handling authenticated ones as well. Having an API under a .netlify namespace seems kinda weird at first blush, but I can probably get used to it.
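For the curious, a function along these lines only takes a few lines of JavaScript. Here's a sketch (with hypothetical table and environment variable names; see the source linked above for the real thing):

// functions/documents.js
const { createClient } = require('@supabase/supabase-js');

exports.handler = async () => {
  // The Supabase credentials live in Netlify environment variables,
  // so nothing secret is ever shipped to the client.
  const supabase = createClient(
    process.env.SUPABASE_URL,
    process.env.SUPABASE_ANON_KEY
  );

  // An anonymous query, as in the example described above; an
  // authenticated variant would first validate the Netlify Identity user.
  const { data, error } = await supabase.from('documents').select('id, title');

  return error
    ? { statusCode: 500, body: JSON.stringify({ error: error.message }) }
    : { statusCode: 200, body: JSON.stringify(data) };
};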
I want to move on to other things now (parsers! document state visualizations!) but might poke at this more later. In the meantime, if you write/build something cool at irydium.dev/repl, let me know!
So it’s my first day at the Recurse Centre, which I blogged briefly about last week. I thought I’d start out by going into a bit more detail about what I’m trying to do with Irydium. This post might be a bit discursive and some of my thoughts are only half-formed: my intent here is to express some of these ideas at all rather than to come up with the perfect formulation for them, which is going to take time. It is based partly on a presentation I gave at Mozilla last Friday (just before going on my 6-week leave, which starts today).
First principles
The premise of Irydium is that despite obvious advances in terms of the ability of computers to crunch numbers and analyze data, our ability to share whatever we learn from these understandings is still far too difficult, especially for people new to the field. Even for domain experts (those with the job title “Data Engineer” or “Data Scientist” or similar) this is still more difficult than one would like.
I’ve made a few observations over the past couple years of trying to explain and document Mozilla’s data platform that I think form a good starting point for trying to close the gap:
Text is pretty great. Writing, just plain text, is (in my opinion) the single best medium for giving context to data. In terms of raw information density and ability to communicate complex ideas, nothing beats it. If you haven’t read it before, the essay “always bet on text” (by Graydon Hoare, creator of Rust) is well worth reading.
Markdown is pretty great too. Essentially an easy-to-write superset of HTML, it’s become the medium of choice for many desktop publishing workflows and has become the basis for many efforts in the “interactive presentation” space that I’m most interested in.
Reactive Systems Make Data Exposition Easier. A reactive abstraction in front of your computational model reduces development time, makes your work more reproducible and is often easier for less-experienced people to understand. I’d cite the long-standing success of Excel and the recent interest in projects like Observable as evidence for this.
Ok, so what is Irydium?
Irydium is, at heart, a way to translate markdown documents into an interactive, compelling visual presentation.
My view is that publishing markdown text on the web is very close to a solved problem, and that we should build on that success rather than invent something new. This is not necessarily a new point of view (e.g. Rmarkdown and JupyterBook have similar premises) but I think some aspects of Irydium’s approach are mildly novel (or at least within the space of “not generally accepted ideas”).
If you want to get a bit of a flavor for how it works, visit the demonstration site (irydium.dev) and play with some of the examples.
What makes Irydium different from <X>?
While there are a bunch of related projects in this space, there’s a few design principles about Irydium that make it a little different from most of what’s already out there1:
Reactive: Irydium is reactive in the same way that a spreadsheet is — that is, any individual change you make will immediately flow to the rest of the system. This provides a more intuitive model for the creator of the document and also makes it easier to create truly interactive visualizations.
Idempotent: in Irydium, a source document will yield the same presentation every time it’s run. There’s no need to reason about what the state of the “kernel” is. This is a highly valuable property when thinking about how to make your analyses reproducible.
Familiar: Irydium uses as few novel concepts and technologies as possible, building on some of the best ideas and technologies produced by the open source community: Python, pyodide, Svelte, mdsvex, MyST and a few others — chosen for having a reasonably shallow learning curve.
Hackable: While I’m working on an online environment to build and share irydium documents, it’s also fully possible to do so using the tools you know and love like Visual Studio Code.
Related projects
With the above caveats, there are still a number of projects that overlap with Irydium’s ideas and/or design goals. A few that seem worth mentioning here:
Iodide: This is the obvious one, at least for those who have been following my work for a while. Iodide was an experiment in making a “web native” version of a scientific notebook: it uses the cell-based computational model that will be familiar to anyone who’s used Jupyter, but all the computation happens on the client. It is probably most famous for launching pyodide, a port of Python to WebAssembly (that Irydium now uses to support Python). I feel like it has a number of design issues (some of which I’ve blogged about previously) and is not currently in active development.
Observable: Client-side reactive notebooks, commercial backing, broadly used in the D3 community. Shares Irydium’s reactive approach, but departs from it in using a custom file format and emphasizing its interactive editing and collaboration environment (which is indeed quite impressive). I’ve used Observable for a few small work things (example) and while there’s a lot I like about it, I am a bit nonplussed by how many wheels it reinvents and the implicit lock-in to a single vendor.2
Starboard: Similar in some ways to Iodide, but in active development. I’ve started chatting a bit with the core developers on whether there might be areas we could collaborate.
Ellx: I found out a bit about this relatively recently, via the Svelte discord. Actually very close in some ways to Irydium in terms of choices of technology (e.g. Svelte). Again, in initial chats with the core developers on possible collaborations.
Success criteria
My intent with Irydium, at this point in its development, is to prove out some concepts and see where they lead. While I’d welcome it if Irydium became a successful, widely adopted environment for building interactive data visualizations, I’d also be totally happy with other outcomes, such as:
Providing a source of ideas and/or code for other people.
Working on (or with) Irydium being a good learning experience, both for myself and others.
Please don’t conflate “unique” with “superior”: I’m well aware that all designs come with trade-offs. In particular, Irydium’s approach will almost certainly make it difficult or impossible to directly interact with “big data” systems in an efficient way. ↩
There is at least one effort (Dataflow) to allow editing Observable documents without using Observable itself, which is interesting. ↩
Approaching my 10-year moz-iversary in July, I’ve decided it’s time to take a bit of a mini-sabbatical: I’ll be out (and trying as hard as possible not to check bugmail) from Friday, June 25th until August 9th. During this time, I’ll be doing a batch at the Recurse Centre (something like a writer’s retreat for programmers), exploring some of my interests around data visualization and analysis that don’t quite fit into my role as a Data Engineer here at Mozilla.
In particular, I’m planning to work a bunch on a project tentatively called “Irydium”, which pursues some of the ideas I sketched out last year in my Iodide retrospective and a few more besides. I’ve been steadily working on it in my off hours, but it’s become clear that some of the things I want to pursue would benefit from more dedicated attention and the broader perspective that I’m hoping the Recurse community will be able to provide.
I had meant to write up a proper blog post to announce the project before I left, but it looks like I’m pretty much out of time. Instead, I’ll just offer up the examples on the newly-minted irydium.dev and invite people to contact me if any of the ideas on the site sound interesting. I’m hoping to blog a whole bunch while I’m there, but probably not under the Mozilla tag. Feel free to add wrla.ch to your RSS feed if you want to follow what I’m up to!
Lots of progress on the Glean Dictionary since I made the initial release announcement a couple of months ago. For those coming in late, the Glean Dictionary is intended to be a data dictionary for applications built using the Glean SDK and Glean.js. This currently includes Firefox for Android and Firefox for iOS, as well as newer initiatives like Rally. Desktop Firefox will use Glean in the future; see Firefox on Glean (FoG).
Production URL
We’re in production! You can now access the Glean Dictionary at dictionary.telemetry.mozilla.org. The old protosaur-based URL will redirect.
Glean Dictionary + Looker = ❤️
At the end of last year, Mozilla chose Looker as our internal business intelligence tool. Frank Bertsch, Daniel Thorn, Anthony Miyaguchi and others have been building out first-class support for Glean applications inside this platform, and we’re starting to see these efforts bear fruit. Looker’s explores are far easier to use for basic data questions, opening up data-based inquiry to a much larger cross-section of Mozilla.
I recorded a quick example of this integration here.
Note that Looker access is restricted to Mozilla employees and NDA’d volunteers. Stay tuned for more public data to be indexed inside the Glean Dictionary in the future.
Glean annotations!
I did up the first cut of a GitHub-based system for adding annotations to metrics — acting as a knowledge base for things data scientists and others have discovered about Glean Telemetry in the field. This can be invaluable when doing new analysis. A good example of this is the annotation added for the opened as default browser metric for Firefox for iOS, which has several gotchas.
Many thanks to Krupa Raj and Leif Oines for producing the requirements which led up to this implementation, as well as their evangelism of this work more generally inside Mozilla. Last month, Leif and I did a presentation about this at Data Club, which has been syndicated onto YouTube.
Since then, we’ve had a very successful working session with some people from Data Science and have started to fill out an initial set of annotations. You can see the progress in the glean-annotations repository.
Other Improvements
Lots more miscellaneous improvements and fixes have gone into the Glean Dictionary in the last several months: see our releases for a full list. One thing that irrationally pleases me is the new labels Linh Nguyen added last week: colorful and lively, they make it easy to see when a Glean metric is coming from a library.
Future work
The Glean Dictionary is just getting started! In the next couple of weeks, we’re hoping to:
Expand the Looker integration outlined above, as our deployment takes more shape.
Work on adding “feature” classification to the Glean Dictionary, to make it easier for product managers and other non-engineering types to quickly find the metrics and other information they need without needing to fully understand what’s in the source tree.
Continue to refine the user interface of the Glean Dictionary as we get more feedback from people using it across Mozilla.
If you’re interested in getting involved, join us! The Glean Dictionary is developed in the open using cutting edge front-end technologies like Svelte. Our conviction is that being transparent about the data Mozilla collects helps us build trust with our users and the community. We’re a friendly group and hang out on the #glean-dictionary channel on Matrix.
Just wanted to give some quick updates on the state of mozregression.
Anti-virus false positives
One of the persistent issues with mozregression is that it is frequently detected as a virus by many popular anti-virus scanners. The causes for this are somewhat complex, but at root the problem is that mozregression requires fairly broad permissions to do the things it needs to do (install and run copies of Firefox) and thus its behavior is hard to distinguish from a piece of software doing something malicious.
Recently there have been a number of mitigations which seem to be improving this situation:
:bryce has been submitting copies of mozregression to Microsoft so that Windows Defender (probably the most popular anti-virus software on this platform) doesn’t flag it.
It’s tempting to lament the fact that this is happening, but in a way I can understand it: it’s hard to reliably detect what kind of software is legitimate and what isn’t. I take the responsibility for distributing this kind of software seriously, and have pretty strict limits on who has access to the mozregression GitHub repository and what pull requests I’ll merge.
CI ported to GitHub Actions
Due to changes in Travis’s policies, we needed to migrate continuous integration for mozregression to GitHub Actions. You can see the gory details in bug 1686039. One possibly interesting wrinkle for others: due to Mozilla’s security policy, we can’t use (most) external actions inside our GitHub repository. I thus rewrote the logic for uploading a mozregression release to GitHub for macOS and Linux GUI builds (Windows builds are still happening via AppVeyor for now) from scratch. Feel free to check the above out if you have a similar need.
macOS Big Sur
As of version 4.0.17, the mozregression GUI now works on macOS Big Sur. It is safe to ask community members to install and use it on this platform (though note the caveats, due to the bundle being unsigned).
Usage Dashboard
Fulfilling a promise I implied last year, I created a public dataset for mozregression and a dashboard tracking mozregression use, built with Observable. There are a few interesting insights and trends that can be gleaned from our telemetry. I’d be curious if the community can find any more!
Housekeeping news: I’m moving this blog back to the wrla.ch domain from wlach.github.io. This domain sorta kinda worked before (I set up a Netlify deploy a couple years ago), but the software used to generate this blog referenced GitHub all over the place in its output, so it didn’t really work as you’d expect. Anyway, this will be the last entry published on wlach.github.io: my plan is to turn that domain into a set of redirects in the future.
I don’t know how many of you are out there who still use RSS, but if you do, please update your feeds. I have filed a bug to update my Planet Mozilla entry, so hopefully the change there will be seamless.
Why? Recent events have made me not want to tie my public web presence to a particular company (especially a larger one, like Microsoft). I don’t have any immediate plans to move this blog off of github, but this gives me that option in the future. For those wondering, the original rationale for moving to github is in this post. Looking back, the idea of moving away from a VPS and WordPress made sense, the move away from my own domain less so. I think it may have been harder to set up static hosting (esp. with HTTPS) at that time… or I might have just been ignorant.
In related news, I decided to reactivate my twitter account: you can once again find me there as @wrlach (my old username got taken in my absence). I’m not totally thrilled about this (I basically stand by what I wrote a few years ago, except maybe the concession I made to Facebook being “ok”), but Twitter seems to be where my industry peers are. As someone who doesn’t have a large organic following, I’ve come to really value forums where I can share my work. That said, I’m going to be very selective about what I engage with on that site: I appreciate your understanding.
It’s coming up on ten years at Mozilla for me, by far the longest I’ve held any job personally and exceedingly long by the standards of the technology industry. When I joined up in Summer 2011 to work on Engineering Productivity1, I really did see it as a dream job: I’d be paid to work full time on free software, which (along with open data and governance) I genuinely saw as one of the best hopes for the future. I was somewhat less sure about Mozilla’s “mission”: the notion of protecting the “open web” felt nebulous and ill-defined and the Mozilla Manifesto seemed vague when it departed from the practical aspects of shipping a useful open source product to users.
It seems ridiculously naive in retrospect, but I can remember thinking at the time that the right amount of “open source” would solve all the problems. What can I say? It was the era of the Arab Spring, WikiLeaks had not yet become a scandal, Google still felt like something of a benevolent upstart, even Facebook’s mission of “making the world more connected” sounded great to me at the time. If we could just push more things out in the open, then the right solutions would become apparent and fixing the structural problems society was facing would become easy!
What a difference a decade makes. The events of the last few years have demonstrated (conclusively, in my view) that open systems aren’t necessarily a protector against abuse by governments, technology monopolies and ill-intentioned groups of individuals alike. Amazon, Google and Facebook are (still) some of the top contributors to key pieces of open source infrastructure but it’s now beyond any doubt that they’re also responsible for amplifying a very large share of the problems global society is experiencing.
At the same time, some of the darker sides of open source software development have become harder and harder to ignore. In particular:
Harassment and microaggressions inside open source communities are rampant: aggressive behaviour in issue trackers, personal attacks on discussion forums, the list goes on. Women and non-binary people are disproportionately affected, although this behaviour exacts a psychological toll on everyone.
Open source software as exploitation: I’ve worked with lots of contributors while at Mozilla. It’s hard to estimate this accurately, but based on some back-of-the-envelope calculations, I’d estimate that the efforts of community volunteers on projects I’ve been involved in have added up (conservatively) to hundreds of thousands of U.S. dollars in labour which has never been directly compensated monetarily. Based on this experience (as well as what I’ve observed elsewhere), I’d argue that Mozilla as a whole could not actually survive on a sustained basis without unpaid work, which (at least on its face) seems highly problematic and creates a lingering feeling of guilt given how much I’ve benefited financially from my time here.
It’s a road to burnout. Properly managing and nurturing an open source community is deeply complex work, involving a sustained amount of both attention and emotional labour — this is difficult glue work that is not always recognized or supported by peers or management. Many of the people I’ve met over the years (community volunteers and Mozilla employees alike) have ended up feeling like it just isn’t worth the effort and have either stopped doing it or have outright left Mozilla. If it weren’t for an intensive meditation practice which I established around the time I started working here, I suspect I would have been in this category by now.
All this has led to a personal crisis of faith. Do openness and transparency inherently lead to bad outcomes? Should I continue to advocate for it in my position? As I mentioned above, the opportunity to work in the open with the community is the main thing that brought me to Mozilla— if I can’t find a way of incorporating this viewpoint into my work, what am I even doing here?
Trying to answer these questions, I went back to the manifesto that I just skimmed over in my early days. Besides openness — what are Mozilla’s values, really, and do I identify with them? Immediately I was struck by how much it felt like it was written explicitly for the present moment (even aside from the addendums which were added in 2018). Many points seem to confront problems we’re grappling with now which I was only beginning to perceive ten years ago.
Beyond that, there was also something that resonated with me on a deeper level. There were a few points, highlighted in bold, that really stood out:
The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
The internet is a global public resource that must remain open and accessible.
The internet must enrich the lives of individual human beings.
Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
Individuals must have the ability to shape the internet and their own experiences on the internet.
The effectiveness of the internet as a public resource depends upon interoperability (protocols, data formats, content), innovation and decentralized participation worldwide.
Free and open source software promotes the development of the internet as a public resource.
Transparent community-based processes promote participation, accountability and trust.
Commercial involvement in the development of the internet brings many benefits; a balance between commercial profit and public benefit is critical.
Magnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.
I think it’s worth digging beneath the surface of these points: what is the underlying value system behind them? I’d argue it’s this, simply put: human beings really do matter. They’re not just line items in a spreadsheet or some other resource to be optimized. They are an end in themselves. People (more so than a software development methodology) are the reason why I show up every day to do the work that I do. This is really an absolute which has enduring weight: it’s a foundational truth of every major world religion, to say nothing of modern social democracy.
What does working and building in the open mean, then? As we’ve seen above, it certainly isn’t something I’d consider “good” all by itself. Instead, I’d suggest it’s a strategy which (if we’re going to follow it) should come out of that underlying recognition of the worth of Mozilla’s employees, community members, and users. Every single one of these people matters, deeply. I’d argue, then, that Mozilla should consider the following factors in terms of how we work in the open:
Are our spaces2 generally safe for people of all backgrounds to be their authentic selves? This not only means freedom from sexual harassment and racial discrimination, but also that they’re able to work to their full potential. This means creating opportunities for everyone to climb the contribution curve, among other things.
We need to be more honest and direct about the economic benefits that community members bring to Mozilla. I’m not sure exactly what this means right now (and of course Mozilla’s options are constrained both legally and economically), but we need to do better about acknowledging their contributions to Mozilla’s bottom line and making sure there is a fair exchange of value on both sides. At the very minimum, we need to make sure that people’s contributions help them grow professionally or otherwise if we can’t guarantee monetary compensation for their efforts.
We need to acknowledge the efforts that our employees make in creating functional communities. This work does not come for free and we need to start acknowledging it in both our career development paths and when looking at individual performance. Similarly, we need to provide better guidance and mentorship on how to do this work in a way that does not exact too hard a personal toll on the people involved — this is a complex topic, but a lot of it in my opinion comes down to better onboarding practices (which is something we should be doing anyway) as well as setting better boundaries (both in terms of work/life balance, and in terms of what you’ll accept in your interactions).
Finally, what is the end result of our work? Do the software and systems we build genuinely enrich people’s lives? Do they become better informed after using our software? Do they help them make better decisions? Free software might be good in itself, but one must also factor in how it is used when measuring its social utility (see: Facebook).
None of the above is easy to address. But the alternatives are either close everything down to public participation (which I’d argue will lead to the death of Mozilla as an organization: it just doesn’t have the resources to compete in the marketplace without the backing of the community) or continue down the present path (which I don’t think is sustainable either). The last ten years have shown that the “open source on auto-pilot” approach just doesn’t work.
I suspect these problems aren’t specific to Mozilla and affect other communities that work in the open. I’d be interested in hearing other perspectives on this family of problems: if you have anything to add, my contact information is below.
This includes our internal communications channels like our Matrix instance as well as issue trackers like Bugzilla. There’s also a question of what to do about non-Mozilla channels, like Twitter or the Orange Site. Although not Mozilla spaces, these places are often vectors for harassment of community members. I don’t have any good answers for what to do about this, aside from offering my solidarity and support to those suffering abuse on these channels. Disagreement with Mozilla’s strategy or policy is one thing, but personal attacks, harassment, and character assassination are never ok — no matter where they’re happening. ↩
(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)
On behalf of Mozilla’s Data group, I’m happy to announce the availability of the first milestone of the Glean Dictionary, a project to provide a comprehensive “data dictionary” of the data Mozilla collects inside its products and how it makes use of it. You can access it via this development URL:
The goal of this first milestone was to provide an equivalent to the popular “probe” dictionary for newer applications which use the Glean SDK, such as Firefox for Android. As Firefox on Glean (FoG) comes together, this will also serve as an index of what data is available for Firefox and how to access it.
Part of the vision of this project is to act as a showcase for Mozilla’s practices around lean data and data governance: you’ll note that every metric and ping in the Glean Dictionary has a data review associated with it — giving the general public a window into what we’re collecting and why.
In addition to displaying a browsable inventory of the low-level metrics which these applications collect, the Glean Dictionary also provides:
Code search functionality (via Searchfox) to see where any given data collection is defined and used.
Information on how this information is represented inside Mozilla’s BigQuery data store.
Over the next few months, we’ll be expanding the Glean Dictionary to include derived datasets and dashboards / reports built using this data, as well as allow users to add their own annotations on metric behaviour via a GitHub-based documentation system. For more information, see the project proposal.
The Glean Dictionary is the result of the efforts of many contributors, both inside and outside Mozilla Data. Special shout-out to Linh Nguyen, who has been moving mountains inside the codebase as part of an Outreachy internship with us. We welcome your feedback and involvement! For more information, see our project repository and Matrix channel (#glean-dictionary on chat.mozilla.org).
A bit belatedly, I thought I’d write a short retrospective on Iodide: an effort to create a compelling, client-centred scientific notebook environment.
Despite not writing a ton about it (the sole exception being this brief essay about my conservative choice for the server backend), Iodide took up a very large chunk of my mental energy from late 2018 through 2019. It was also essentially my only attempt at working on something that was on its way to being an actual product while at Mozilla: while I’ve led other projects that have been interesting and/or impactful in my 9-odd years here (mozregression and perfherder being the biggest successes), they fall firmly into the “internal tools supporting another product” category.
At this point it’s probably safe to say that the project has wound down: no one is being paid to work on Iodide and it’s essentially in extreme maintenance mode. Before it’s put to bed altogether, I’d like to write a few notes about its approach, where I think it had big advantages, and where it seems to fall short. I’ll conclude with some areas I’d like to explore (or would like to see others explore!). I’d like to emphasize that this is my opinion only: the rest of the Iodide core team no doubt have their own thoughts. That said, let’s jump in.
What is Iodide, anyway?
One thing that’s interesting about a product like Iodide is that people tend to project their hopes and dreams onto it: if you asked any given member of the Iodide team to give a 200 word description of the project, you’d probably get a slightly different answer emphasizing different things. That said, a fair initial approximation of the project vision would be “a scientific notebook like Jupyter, but running in the browser”.
What does this mean? First, let’s talk about what Jupyter does: at heart, it’s basically a Python “kernel” (read: interpreter), fronted by a webserver. You send snippets of code to the interpreter via a web interface, and they are faithfully run on the backend (either on your local machine in a separate process or on a server in the cloud). Results are then returned to the web interface and rendered by the browser in one form or another.
Iodide’s model is quite similar, with one difference: instead of running the kernel in another process or somewhere in the cloud, all the heavy lifting happens in the browser itself, using the local JavaScript and/or WebAssembly runtime. The very first version of the product that Hamilton Ulmer and Brendan Colloran came up with was definitely in this category: it had no server-side component whatsoever.
Truth be told, even applications like Jupyter do a fair amount of computation on the client side to render a user interface and the results of computations: the distinction is not as clear cut as you might think. But in general I think the premise holds: if you load an iodide notebook today (still possible on alpha.iodide.io! no account required), the only thing that comes from the server is a little bit of static JavaScript and a flat file delivered as a JSON payload. All the “magic” of whatever computation you might come up with happens on the client.
Let’s take a quick look at the default tryit iodide notebook to give an idea of what I mean:
%% fetch
// load a js library and a csv file with a fetch cell
js: https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.2/d3.js
text: csvDataString = https://data.sfgov.org/api/views/5cei-gny5/rows.csv?accessType=DOWNLOAD
%% js
// parse the data using the d3 library (and show the value in the console)
parsedData = d3.csvParse(csvDataString)
%% js
// replace the text in the report element "htmlInMd"
document.getElementById('htmlInMd').innerText = parsedData[0].Address
%% py
# use python to select some of the data
from js import parsedData
[x['Address'] for x in parsedData if x['Lead Remediation'] == 'true']
The %% delimiters indicate individual cells. The fetch cell is an iodide-native cell with its own logic to load the specified resources into JavaScript variables when evaluated. js and py cells direct the browser to interpret the code inside of them, which causes the DOM to mutate. From these building blocks (and a few others), you can build up an interactive report which can also be freely forked and edited by anyone who cares to.
In some ways, I think Iodide has more in common with services like Glitch or Codepen than with Jupyter. In effect, it’s mostly offering a way to build up a static web page (doing whatever) using web technologies— even if the user interface affordances and cell-based computation model might remind you more of a scientific notebook.
What works well about this approach
There’s a few nifty things that come from doing things this way:
The environment is easily extensible for those familiar with JavaScript or other client side technologies. There is no need to mess with strange plugin architectures or APIs or conform to the opinions of someone else on what options there are for data visualization and presentation. If you want to use jQuery inside your notebook for a user interface widget, just import it and go!
The architecture scales to many, many users: since all the computation happens on the client, the server’s only role is to store and retrieve notebook content. alpha.iodide.io has happily been running on Heroku’s least expensive dyno type for its entire existence.
Reproducibility: so long as the iodide notebook has no references to data on third-party servers with authentication, there is no stopping someone from being able to reproduce whatever computations led to your results.
Related to reproducibility, it’s easy to build off of someone else’s notebook or exploration, since everything is so self-contained.
I continue to be surprised and impressed with what people come up with in the Jupyter ecosystem, so I don’t want to claim these are things that “only Iodide” (or other tools like it) can do — what I will say is that I haven’t seen many things that combine both the conciseness and expressiveness of iomd. The beauty of the web is that there is such an abundance of tutorials and resources to support creating interactive content: when building iodide notebooks, I would freely borrow from resources such as MDN and Stack Overflow, instead of being locked into what the authoring software thinks one should be able to express.
What’s awkward
Every silver lining has a cloud and (unfortunately) Iodide has a pretty big one. Depending on how you use Iodide, you will almost certainly run into the following problems:
You are limited by the computational affordances provided by the browser: there is no obvious way to offload long-running or expensive computations to a cluster of machines using a technology like Spark.
Long-running computations will block the main thread, causing your notebook to become extremely unresponsive for the duration.
The first, I (and most people?) can usually live with. Computers are quite powerful these days and most data tasks I’m interested in are easily within the range of capabilities of my laptop. In the cases where I need to do something larger scale, I’m quite happy to fire up a Jupyter Notebook, BigQuery Console, or <other tool of choice> to do the heavy-lifting, and then move back to a client-side approach to visualize and explore my results.
The second is much harder to deal with, since it interrupts the process of exploration that is so key to scientific computing. I’m quite happy to wait 5 seconds, 15 seconds, even a minute for a longer-running computation to complete, but if I see the slow script dialog and everything about my environment grinds to a halt, it not only blocks my work but causes me to lose faith in the system. How do I know if it’s even going to finish?
This occurs way more often than you might think: even a simple notebook loading pandas (via pyodide) can cause Firefox to pop up the slow script dialog.
Why is this so? Aren’t browsers complex beasts which can do a million things in parallel these days? Only sort of: while the graphical and network portions of loading and using a web site are highly parallel, JavaScript and any changes to the DOM can only occur synchronously by default. Let’s break down a simple iodide example which trivially hangs the browser:
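The cell in question doesn't appear inline here, but judging from the description that follows, it's about as simple as it gets: just an unbounded loop, something like:

%% js
// monopolize the main thread: this cell never yields control back to the browser
while (true) {}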
What’s going on here? When you execute that cell, the web site’s sole thread is now devoted to running that infinite loop. No longer can any other JavaScript-based event handler be run, so for example the text editor (which uses Monaco under the hood) and menus are now completely unresponsive.
The iodide team was aware of this issue from the time I joined the project. There was no end of discussion about how to work around it, but it never really came to a satisfying conclusion. The most obvious solution is to move the cell’s computation to a web worker, but workers don’t have synchronous access to the DOM, which is required for web content to work as you’d expect. While there are projects like ComLink that attempt to bridge this divide, they require both a fair amount of skill and code to use effectively. As mentioned above, one of the key differentiators between iodide and other notebook environments is that tools like jQuery, d3, etc. “just work”. That is, you can take a code snippet off the web and run it inside an iodide notebook: there’s no workaround I’ve been able to find which maintains that behaviour and ensures that the iodide notebook environment is always responsive.
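To give a flavour of what that bridging involves, here's a rough sketch using ComLink (hypothetical file names and a stand-in computation; this is not code from iodide itself):

// main.js: offload a long computation to a worker via ComLink
import * as Comlink from 'comlink';

const worker = new Worker('worker.js', { type: 'module' });
const api = Comlink.wrap(worker);

async function run() {
  // The main thread stays responsive while this runs in the worker...
  const result = await api.longComputation();
  // ...but the worker can't touch the DOM, so any
  // document.getElementById(...)-style code has to stay on this side.
  document.getElementById('result').textContent = result;
}
run();

// worker.js: expose the expensive function to the main thread
import * as Comlink from 'comlink';

function longComputation() {
  let total = 0;
  for (let i = 0; i < 1e9; i++) total += i; // expensive, but off the main thread
  return total;
}

Comlink.expose({ longComputation });

Workable, but notice how much ceremony there is relative to just writing the loop in a cell: that's the skill-and-code tax referred to above, and code snippets taken off the web won't be structured this way.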
It took a while for this to really hit home, but having some separation from the project, I’ve come to realize that the problem is that the underlying technology isn’t designed for the task we were asking of it, nor is it likely to ever be in the near future. While the web (by which I mean not only the browser, but the ecosystem of popular tooling and libraries that has been built on top of it) has certainly grown in scope to things it was never envisioned to handle like games and office applications, it’s just not optimized for what I’d call “editable content”. That is, the situation where a web page offers affordances for manipulating its own representation.
While modern web browsers have evolved to (sort of) protect one errant site from bringing the whole thing down, they certainly haven’t evolved to protect a particular site against itself. And why would they? Web developers usually work in the terminal and text editor: the assumption is that if their test code misbehaves, they’ll just kill their tab and start all over again. Application state is persisted either on-disk or inside an external database, so nothing will really be lost.
Could this ever change? Possibly, but I think it would be a radical rethinking of what the web is, and I’m not really sure what would motivate it.
It describes how people typically perform computational inquiry: you poke around with some raw data and run some transformations on it; only after that process is complete do you start trying to build up a “presentation” of your results for your audience.
Looking back, it’s clear that Iodide’s strong suit was the explanation part of this workflow, rather than collaboration and exploration. My strong suspicion is that we actually want to use different tools for each one of these tasks. Coincidentally, this also maps onto the bulk of my experience working with data at Mozilla, using iodide or not: my most successful front-end data visualization projects were typically the distilled result of a very large number of ad hoc explorations (Python scripts, SQL queries, Jupyter notebooks, …). The actual visualization itself contains very little computational meat: basically “just enough” to let the user engage with the data fruitfully.
Unfortunately much of my work in this area uses semi-sensitive internal Mozilla data so can’t be publicly shared, but here’s one example:
I built this dashboard in a single day to track our progress (resolved/open bugs) when we were migrating our data platform from AWS to GCP. It was useful: it let us quickly visualize which areas needed more attention and gave upper management more confidence that we were moving closer to our goal. However, there was little computation going on in the client: most of the backend work was happening in our Bugzilla instance, with the iodide notebook just doing a little bit of post-processing to visualize the results in a useful way.
Along these lines, I still think there is a place in the world for an interactive visualization environment built on principles similar to iodide: basically a markdown editor augmented with primitives oriented around data exploration (for example, charts and data tables), with allowances to bring in any JavaScript library you might want. However, any in-depth data processing would be assumed to have mostly been run elsewhere, in advance. The editing environment could either be the browser or a code editor running on your local machine, per user preference: in either case, there would not be any really important computational state running in the browser, so things like dynamic reloading (or just closing an errant tab) should not cause the user to lose any work.
This would give you the best of both worlds: you could easily create compelling visualizations that are easy to share and remix at minimal cost (because all the work happens in the browser), but you could also use whatever existing tools work best for your initial data exploration (whether that be JavaScript or the more traditional Python data science stack). And because the new tool has a reduced scope, I think building such an environment would be a much more tractable project for an individual or small group to pursue.
More on this in the future, I hope.
Many thanks to Teon Brooks and Devin Bayly for reviewing an early draft of this post
Periodically the discussion comes up about pruning away old stored Firefox build artifacts in S3. Each build is tens of megabytes, multiply that by the number of platforms we support and the set of revisions we churn through on a daily basis, and pretty soon you’re talking about real money.
This came up recently in a discussion about removing the legacy taskcluster deployment — what do we actually lose by cutting back our archive of integration builds? The main reason to keep them around is to facilitate bisection testing with mozregression, to find out when a bug was introduced. Up to now, discussions about this have been a bit hand-wavey: we do keep logs about who’s accessing old builds, but it’s never been clear whether it was mozregression accessing them or something else.
Happily, now that mozregression has some telemetry, it’s a little easier to get some answers on what people are actually doing. This query gets the distribution of build ages (launched or bisected) over the past 6 months, at month-long granularity.1 Ages are relative to the date mozregression was launched: for example, if someone asked for a build from May 2019 in June 2020, the number would be “13”.
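In code, the age being reported is just a whole-month delta between two dates, something like this hypothetical helper (not the actual query logic):

// number of whole months between a build's date and the date
// mozregression was run
function buildAgeInMonths(buildDate, runDate) {
  return (runDate.getFullYear() - buildDate.getFullYear()) * 12 +
    (runDate.getMonth() - buildDate.getMonth());
}

// e.g. asking for a May 2019 build in June 2020 (JS months are 0-indexed):
buildAgeInMonths(new Date(2019, 4, 1), new Date(2020, 5, 1)); // => 13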
Unsurprisingly, Firefox shippable builds are the number one thing people try to bisect. Let’s take a little bit of a closer look at what’s going on there:
The median value is 1, which indicates that most people are bisecting builds within one month of the day in which mozregression was run. And the upper fence result is 6, suggesting that most of the time people are looking at a regression range that is within a 6 month range. However, looking more closely at the data points themselves (the little points in the chart above), there are a considerable number of outliers where a range greater than 20 months was asked for.
… which brings us to the question that we want to answer. Given that getting old builds isn’t that common (which we sort of knew already, based on the access patterns in the S3 logs), what is the impact of the times that we do? And it’s here where I have to throw up my hands and say “I don’t know” and suggest that we go back to empirical observation and user research.
You can go back to the thread I linked above and see that core Firefox/Gecko developers find the ability to get a precise regression range for older revisions valuable. One thing that’s worth mentioning is that mozregression isn’t run that often, compared to a product that we ship: on the order of 50 to 100 times per day. But when it comes to internal tooling, a small amount of use might have a big impact: if a mozregression invocation saves a developer a few hours (or more), that’s a real benefit to Firefox and Mozilla. The same argument might apply here, where a small number of bisections on older builds might have a disproportionate impact on the quality of the product.
I only added the telemetry to capture this information relatively recently, so we’re actually only looking at about a month of data in this post. We’ll have more complete results later this year. ↩