Showing posts tagged Mozilla

Learning about Psychological Safety at the Recurse Center

Oct 19th, 2021

Mozilla Recurse

Last summer, I took a 6-week sabbatical from my job to attend a virtual “programmers retreat” at the Recurse Center. I thought I’d write up some notes on the experience, with a particular focus on what makes an environment well suited to learning, innovation, and personal growth.

Some context: I’m currently working as a software engineer at Mozilla, building out our data pipeline and analysis tooling. I’ve been at the company for more than 10 years (my “anniversary” actually passed while I was out). I started out as a senior engineer in 2011, and was promoted to staff engineer in 2016. In tech-land, this is a really long tenure at a company. I felt like it was time to take a break from my day-to-day, explore some new ideas and concepts, and hopefully expose myself to a broader group of people in my field.

My original thinking was that I would mostly be spending this time building out an interactive computation environment I’ve been working on called Irydium. And I did quite a bit of that. However, I think the main thing I took away from this experience was some insight on what makes a remote environment for knowledge work really “click”. In particular, what makes somewhere feel psychologically safe, and how this feeling allows us to innovate and do our best work.

While the Recurse Center obviously has different goals than an organization that builds and delivers consumer software, I do think there are some things that it does that could be applied to Mozilla (and, likely, many other tech workplaces).

What is the Recurse Center?

Most succinctly, the Recurse Center is a “writer’s retreat for programmers”. It tries to provide an environment conducive to learning and creativity, an opportunity to refine your craft and learn new things, both from the act of programming itself and from interactions with the other like-minded people attending. The Recurse Center admits a wide variety of people, from those who have only been through a coding bootcamp to those who have been in the industry many years, like myself. The main admission criteria, from what I gather, are curiosity and friendliness.

Once admitted, you do a “batch”— either a mini (1 week), half-batch (6 weeks), or a full batch (12 weeks). I did a half-batch.

How does it work (during a global pandemic)?

The Recurse experience used to be entirely in-person, in a space in New York City - if you wanted to go, you needed to move there at least temporarily. Obviously that’s out the window during a Global Pandemic, and all activities are currently happening online. This was actually pretty ideal for me at this point in my life, as it allowed me to participate entirely remotely from my home in Hamilton, Ontario, Canada (near Toronto).

There are a few elements that make “Virtual RC” tick:

Why does it work?

So far, what I’ve described probably sounds a lot like any remote tech workplace during the pandemic… and it sort of is! In some ways, my schedule and life while at Recurse didn’t feel all that different from my normal day-to-day. Wake up in the morning, drink coffee, meditate, work for roughly 8 hours, done. Qualitatively, however, my experience at Recurse felt unusually productive, and I learned a lot more than I expected to: not just the core stuff related to Irydium, but also unexpected new concepts like CRDTs, product design, and even how Visual Studio Code syntax highlighting works.

What made the difference? Certainly, not having the normal pressures of a workplace helps - but I think there’s more to it than that. The way RC is constructed reinforces a sense of psychological safety which I think is key to learning and growth.

What is psychological safety and why should I care?

Psychological safety is a bit of a hot topic these days and there’s a lot of discussion about it in management circles. I think it comes down to a feeling that you can take risks and “put yourself out there” without fear that you’ll be ignored, attacked, or ridiculed.

Why is this important? I would argue, because knowledge work is about building understanding — going from a place of not understanding to understanding. If you’re working on anything at all innovative, there is always an element of the unknown. In my experience, there is virtually always a sense of discomfort and uncertainty that goes along with that. This goes double when you’re working around and with people that you don’t know terribly well (and who might have far more experience than you). Are they going to make fun of you for not knowing a basic concept or for expressing an idea that’s “so wrong I don’t even know where to begin”? Or, just as bad, will you not get any feedback on your work at all?

In reality, except in truly toxic environments, you’ll rarely encounter outright abusive behaviour. But the isolation of remote work can breed similar feelings of disquiet and discomfort over time. My sense, after a year of working “hardcore” remote in COVID times, is that our normal workplace rituals of meetings, “stand ups”, and discussions over Slack don’t provide enough space for a meaningful sense of psychological safety to develop. They’re good enough for measuring progress towards agreed-upon goals but a true sense of belonging depends on less tightly scripted interactions among peers.

How the Recurse environment creates psychological safety

But the environment I described above isn’t that different from a workplace, is it? Speaking from my own experience, my coworkers at Mozilla are all pretty nice people. There are also many channels for informal discussion at Mozilla, and of course direct messaging is always available (via Slack or Matrix). And yet, I still feel there is a pretty large gap between the two experiences. So what makes the difference? I’d say there were three important aspects of Recurse that really helped here: social rules, gentle prompts, and a closed space.

Social rules

There’s been a lot of discussion about community participation guidelines and standards of behaviour in workplaces. In general, these types of policies target really egregious behaviour like harassment: this is a pretty low bar. They aren’t, in my experience, sufficient to create an environment that actually feels safe.

The Recurse Center goes over and above a basic code of conduct, with four simple social rules:

These rules aren’t “commandments” and you’re not meant to feel shame for violating them. The important thing is that by being there, the rules create an environment conducive to learning and growth. You can be reasonably confident that you can bring up a question or discussion point (or respond to one) and it won’t lead to a bad outcome. For example, you can expect not to be made fun of for asking what a UNIX socket is (and if you are, you can tell the person doing so to stop). Rather than there being an unspoken rule that everyone should already know everything about what they are trying to do, there is a spoken rule that states it’s expected that they don’t.

Working on Irydium, there’s an infinite number of ways I can feel incompetent: that’s unavoidable when engaging with concepts I still don’t feel completely comfortable with (parsers, compilers, WebAssembly… the list goes on). Knowing that I could talk about what I’m working on (or something I’m interested in) and that the responses I got would be constructive and directed at the project, not the person, made all the difference.1

Gentle prompts

The thing I loved the most about Recurse were the gentle prompts to engage with other people, talk about your work, and get help. A few that I really enjoyed during my time there:

… and I could go on. What’s important are not the specific activities, but their end effect of building connectedness, creating opportunities for serendipitous collaboration and interaction (more than one discussion group came out of someone’s checkin post on Zulip) and generally creating an environment well-suited to learning.

A (semi) closed space

One of the things that makes the gentle prompts above “work” is that you have some idea of who you’re going to be interacting with. Having some predictability about who’s going to see what you post and engage with you (that they were vetted by RC’s interview process and are committed to the above-mentioned social rules) gives you some confidence to be vulnerable and share things that you might be reluctant to otherwise.

Those who have known me for a while will probably see the above as being a bit of a departure from what I normally preach: throughout my tenure at Mozilla, I’ve constantly pushed the people I’ve worked with to do more work in public. In the case of a product like Firefox, which touches so many people, I think open and transparent practices are absolutely essential to building trust, creating opportunity, and ensuring that our software reflects a diversity of views. I applied the same philosophy to Irydium’s development while I was at the Recurse Center: I set up a public Matrix channel to discuss the project, published all my work on GitHub, and was quite chatty about what I was working on, both in this blog and on Twitter.

The key, I think, is being deliberate about what approach you take when: there is a place for both public and private conversations about what we work on. I’m strongly in favour of open design documents, community calls, public bug trackers and open source in general. But I think it’s also pretty ok to have smaller spaces for learning, personal development, and question asking. I know I strongly appreciated having a smaller group of people that I could talk to about ideas that were not yet fully formed: you can always bring them out into the open later. The psychological risk of working in public can be mitigated by the psychological safety that can be developed within an intentional community.

Bringing it back

Returning to my job, I wondered if it might be possible to bring some of what I described above back to Mozilla. Obviously not everything would be directly transferable: Mozilla has its own mission and goals, and there are pressures that exist in a workplace that do not exist in an environment purely directed at learning. Still, I suspected that there was something we could do here. And that it would be worth doing, not just to improve the felt experience of the people here (though that would be reason enough) but also to get more feedback on our work and create more opportunities for collaboration and innovation.

I felt like trying to do something inside our particular organization (Data Engineering and Data Science) would be the most tractable initial step. I talked a bit about my experience with Will Kahn-Greene (who has been at Mozilla around the same length of time as I have) and we came up with what we called the “Data Neighbourhood” project: a set of grassroots micro-initiatives to increase our connectedness as a group. As an organization directed primarily at serving other parts of Mozilla, most of our team’s communication is directed outward. It’s often hard to know what everyone else is up to, where they’re struggling, and how we could help each other out. Attacking that problem directly seemed like the best place to start.

The first experiment we tried was a “data checkins” channel on Slack, a place for people to talk informally about their work (or life!). I explicitly set it up with a similar set of social rules as outlined above and tried to emphasize that it was a place to talk about how things are going, rather than a place to report status to your manager. After a somewhat slow start (the initial posts were from Will, myself, and a few other people from Data Engineering who had been around for a long time) we’re beginning to see engagement from others, including some newer people I hadn’t interacted with much before. There’s also been a few useful threads of conversations across different sub-teams (for example, a discussion on how we identify distinct versions of Firefox for iOS) that likely would not have happened without the channel.

Since then, others have tried a few other things in the same vein (an ad hoc coffee chat pairing bot, a “writing help” channel) and there are some signs of success. There’s clearly an appetite for new and better ways for us to relate to each other about the work we’re doing, and I’m excited to see how these ideas evolve over time.

I suspect there are limits to how psychologically safe a workplace can ever feel (and some of that is probably outside of any individual’s control). There are dynamics in a workplace which make applying some of Recurse’s practices difficult. In particular, a posture of “not knowing things is o.k.” may not apply perfectly to a workplace where people are hired (and promoted) based on perceived competence and expertise. Still, I think it’s worth investigating what might be possible within the constraints of the system we’re in. There are big potential benefits, for our creative output and our well-being.

Many thanks to Jenny Zhang, Kathleen Beckett, Joe Trellick, Taylor Phebillo and Vaibhav Sagar, and Will Kahn-Greene for reviewing earlier drafts of this post

  1. This is generally considered best practice inside workplaces as well. For example, see Google’s guide on how to write code review comments


Python dependency gotchas: always go to the source

Aug 16th, 2021

Mozilla Python

Getting back into the swing of things at Mozilla after my extended break. I’m currently working on enhancing and extending Looker support for Glean-based applications, which eventually led me back to working on bigquery-etl, our framework for creating derived datasets in our data lake.

I spent some time working on improving the initial developer experience of bigquery-etl early this year, so I figured it would be no problem to get going again despite an extended hiatus from it (I think it’s probably been ~2–3 months since I last touched it). Unfortunately the first thing I got after creating a fresh virtual environment (to pick up the new dependency updates) was this exciting looking error:

wlach@antwerp bigquery-etl % ./bqetl --help
Traceback (most recent call last):
  ...
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/__init__.py", line 16, in <module>
    from .encryption_config import EncryptionConfiguration
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/encryption_config.py", line 26, in <module>
    class EncryptionConfiguration(proto.Message):
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/message.py", line 200, in __new__
    file_info = _file_info._FileInfo.maybe_add_descriptor(filename, package)
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/_file_info.py", line 42, in maybe_add_descriptor
    descriptor=descriptor_pb2.FileDescriptorProto(
TypeError: descriptor to field 'google.protobuf.FileDescriptorProto.name' doesn't apply to 'FileDescriptorProto' object

What I did

Since we have pretty decent continuous integration at Mozilla, when I see an error like this I am usually pretty sure it’s some kind of strange interaction between my local development environment and whatever dependencies we’ve specified for the repository in question. Usually these problems are pretty easy to solve.

The first thing I tried was typing the error into Google to see if it had come up for anyone else before. I tried several variations of TypeError: descriptor to field and FileDescriptorProto and nothing really turned up. This strategy almost always turns up something; when it doesn’t, it usually indicates that something pretty strange is happening.

To see if this was a strange problem particular to us, I asked on our internal channel but no one had offhand seen or heard of this error either. One of my colleagues (who had a working setup on a Mac, the same environment I was using) suggested I set up pyenv to isolate my development environment, which was a good idea but did not seem to solve the problem: both Python 3.8 and 3.9 installed via pyenv ran into the exact same issue.

After flailing around trying a number of other failed approaches (maybe I need to upgrade the version of virtualenv that we’re using?), I broke down and looked harder at the error itself. It seemed to be some kind of typing error in Google’s protobuf library, which google-cloud-bigquery is calling. If this sort of thing was happening to everyone, we probably would have seen it happening more broadly. So my guess, again, was that it was happening due to an obscure interaction between some variable on my machine and this particular combination of dependencies.

At this point, I systematically went through our set of python dependencies to see what might be the matter. For the most part, I found nothing surprising or suspicious. google-api-core was at the latest version, as was google-cloud-bigquery. However, I did notice that the version of protobuf we were using was a little older (3.15.8 when the latest “official” version on pypi was 3.17.3).
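For what it’s worth, the version audit here amounts to something like the following minimal sketch (hypothetical, not part of bigquery-etl itself), which uses the standard library’s importlib.metadata to dump the installed versions of the suspect packages so they can be compared against the latest releases on PyPI:

# Quick-and-dirty check of installed dependency versions inside the
# virtualenv, for comparison against the latest releases on PyPI.
from importlib.metadata import version

for pkg in ("google-api-core", "google-cloud-bigquery", "protobuf"):
    print(f"{pkg}: {version(pkg)}")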

It seemed like a long shot that the problem was there, but upgrading the dependency seemed worth a try just in case. So I bumped the version of protobuf to the latest version in my local checkout (pip install protobuf==3.17.3)…

… and sure enough, after doing so, the problem was fixed and ./bqetl --help started working again:

wlach@antwerp bigquery-etl % ./bqetl --help
Usage: bqetl [OPTIONS] COMMAND [ARGS]...

  CLI tools for working with bigquery-etl.

...

After doing so, I did up a quick pull request and the problem is now fixed, at least for me.

It’s a bit unfortunate that dependabot (which we have configured for this repository) didn’t send an update for protobuf, which would have fixed this problem earlier.1 It seems like it’s not completely reliable for python packages, for whatever reason: I have also noticed this problem with mozregression.

I suspect (though can’t confirm) that the problem here is a backwards-incompatible change made to either protobuf or one of the packages that uses it. However, the nature of the incompatibility seems subtle: bigquery-etl works fine with the old set of dependencies we run in continuous integration and it appears to only come up in specific circumstances (i.e. mine). Unfortunately, I need to get back to what I was actually planning to work on and don’t have time to unwind the rather complex set of interactions going on here. Maybe later!

What I would have done differently

This kind of illustrates (again) to me that while some shortcuts and heuristics can save a bunch of time and mental effort (Googling things all the time is basically standard practice in the industry at this point), sometimes you really just need to look a little closer at the problem to find a solution. I was hesitant to do this in this case because I’m never sure where those kinds of rabbit holes are going to take me (e.g. I spent several days debugging a bad interaction between Kubernetes and our airflow cluster in late 2019 with not much to show for the effort), but often all it takes is understanding the general shape of the problem to move you to a quick solution.

Other lessons

Here are a couple of other things this experience reinforced for me (these are more subjective, take them or leave them):

  1. As an aside, the main reason we use dependabot and aggressively update packages like google-api-core is due to a bug in pip


10 years at Mozilla

Jul 12th, 2021

Mozilla Recurse

Yesterday (July 11, 2021) marked 10 years since I started at the Mozilla Corporation. My life has changed a ton in those years: in that time I ended a marriage, changed the city in which I live twice, and took up religion1. Mozilla has also changed pretty drastically in my time here, especially in the last year.

Yet somehow I’m still at it, for more or less the same reasons that led me to accept my initial offer to join the A-team.2 The Internet has immense potential to be a force for individual empowerment, and yet, more than ever, we see this technology used to consolidate unchecked power, spread misinformation, and generally exploit people. Mozilla is not perfect (no organization is: 10 years anywhere will teach you that), but it’s one of the few remaining counter-forces to these accelerating trends. While I’m currently taking a bit of a break to explore some stuff on my own, I am looking forward to getting back to work on the mission when I return in mid-August.

  1. To the extent that Zen Buddhism is a religion. 

  2. I’ve since moved to Data @ Mozilla 


Mini-sabbatical and introducing Irydium

Jun 23rd, 2021

Mozilla Recurse Irydium

Approaching my 10-year moz-iversary in July, I’ve decided it’s time to take a bit of a mini-sabbatical: I’ll be out (and trying as hard as possible not to check bugmail) from Friday, June 25th until August 9th. During this time, I’ll be doing a batch at the Recurse Center (something like a writer’s retreat for programmers), exploring some of my interests around data visualization and analysis that don’t quite fit into my role as a Data Engineer here at Mozilla.

In particular, I’m planning to work a bunch on a project tentatively called “Irydium”, which pursues some of the ideas I sketched out last year in my Iodide retrospective and a few more besides. I’ve been steadily working on it in my off hours, but it’s become clear that some of the things I want to pursue would benefit from more dedicated attention and the broader perspective that I’m hoping the Recurse community will be able to provide.

I had meant to write up a proper blog post to announce the project before I left, but it looks like I’m pretty much out of time. Instead, I’ll just offer up the examples on the newly-minted irydium.dev and invite people to contact me if any of the ideas on the site sound interesting. I’m hoping to blog a whole bunch while I’m there, but probably not under the Mozilla tag. Feel free to add wrla.ch to your RSS feed if you want to follow what I’m up to!


Glean Dictionary updates

Jun 2nd, 2021

Mozilla Glean

(this is a cross-post from the data blog)

Lots of progress on the Glean Dictionary since I made the initial release announcement a couple of months ago. For those coming in late, the Glean Dictionary is intended to be a data dictionary for applications built using the Glean SDK and Glean.js. This currently includes Firefox for Android and Firefox for iOS, as well as newer initiatives like Rally. Desktop Firefox will use Glean in the future; see Firefox on Glean (FoG).

Production URL

We’re in production! You can now access the Glean Dictionary at dictionary.telemetry.mozilla.org. The old protosaur-based URL will redirect.

Glean Dictionary + Looker = ❤️

At the end of last year, Mozilla chose Looker as our internal business intelligence tool. Frank Bertsch, Daniel Thorn, Anthony Miyaguchi and others have been building out first-class support for Glean applications inside this platform, and we’re starting to see these efforts bear fruit. Looker’s explores are far easier to use for basic data questions, opening up data-based inquiry to a much larger cross section of Mozilla.

I recorded a quick example of this integration here:

Note that Looker access is restricted to Mozilla employees and NDA’d volunteers. Stay tuned for more public data to be indexed inside the Glean Dictionary in the future.

Glean annotations!

I did up the first cut of a GitHub-based system for adding annotations to metrics — acting as a knowledge base for things data scientists and others have discovered about Glean Telemetry in the field. This can be invaluable when doing new analysis. A good example of this is the annotation added for the opened as default browser metric for Firefox for iOS, which has several gotchas:

Many thanks to Krupa Raj and Leif Oines for producing the requirements which led up to this implementation, as well as their evangelism of this work more generally inside Mozilla. Last month, Leif and I did a presentation about this at Data Club, which has been syndicated onto YouTube:

Since then, we’ve had a very successful working session with some people from Data Science and have started to fill out an initial set of annotations. You can see the progress in the glean-annotations repository.

Other Improvements

Lots more miscellaneous improvements and fixes have gone into the Glean Dictionary in the last several months: see our releases for a full list. One thing that irrationally pleases me is the new labels Linh Nguyen added last week: colorful and lively, they make it easy to see when a Glean metric is coming from a library:

Future work

The Glean Dictionary is just getting started! In the next couple of weeks, we’re hoping to:

If you’re interested in getting involved, join us! The Glean Dictionary is developed in the open using cutting edge front-end technologies like Svelte. Our conviction is that being transparent about the data Mozilla collects helps us build trust with our users and the community. We’re a friendly group and hang out on the #glean-dictionary channel on Matrix.


mozregression update May 2021

May 10th, 2021

Mozilla Glean Telemetry mozregression

Just wanted to give some quick updates on the state of mozregression.

Anti-virus false positives

One of the persistent issues with mozregression is that it is frequently detected as a virus by many popular anti-virus scanners. The causes for this are somewhat complex, but at root the problem is that mozregression requires fairly broad permissions to do the things it needs to do (install and run copies of Firefox) and thus its behavior is hard to distinguish from a piece of software doing something malicious.

Recently there have been a number of mitigations which seem to be improving this situation:

It’s tempting to lament the fact that this is happening, but in a way I can understand it: it’s hard to reliably detect what kind of software is legitimate and what isn’t. I take the responsibility for distributing this kind of software seriously, and have pretty strict limits on who has access to the mozregression GitHub repository and what pull requests I’ll merge.

CI ported to GitHub Actions

Due to changes in Travis’s policies, we needed to migrate continuous integration for mozregression to GitHub Actions. You can see the gory details in bug 1686039. One possibly interesting wrinkle for others: due to Mozilla’s security policy, we can’t use (most) external actions inside our GitHub repository. I thus rewrote the logic for uploading a mozregression release to GitHub for MacOS and Linux GUI builds (Windows builds are still happening via AppVeyor for now) from scratch. Feel free to check the above out if you have a similar need.
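To give a rough flavour of what that involves, here’s a minimal sketch (not the actual mozregression CI code; the repository slug, function name, and reliance on a GITHUB_TOKEN environment variable are assumptions) of attaching a build artifact to an existing GitHub release by driving the REST API directly with Python and requests, rather than relying on an external action:

# Hypothetical sketch: attach a build artifact to an existing GitHub release
# using the REST API directly (no third-party action required).
import os
import requests

REPO = "mozilla/mozregression"  # assumed repository slug
API = "https://api.github.com"
UPLOADS = "https://uploads.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",  # assumed to be provided by the workflow
    "Accept": "application/vnd.github.v3+json",
}

def upload_release_asset(tag: str, path: str) -> None:
    # Look up the release corresponding to the tag being built.
    resp = requests.get(f"{API}/repos/{REPO}/releases/tags/{tag}", headers=HEADERS)
    resp.raise_for_status()
    release_id = resp.json()["id"]

    # Upload the artifact (e.g. a .dmg or .tar.gz) as a release asset.
    with open(path, "rb") as f:
        upload = requests.post(
            f"{UPLOADS}/repos/{REPO}/releases/{release_id}/assets",
            params={"name": os.path.basename(path)},
            headers={**HEADERS, "Content-Type": "application/octet-stream"},
            data=f,
        )
    upload.raise_for_status()

The real code in the repository presumably handles more edge cases than this, but the general shape is simple enough that doing without prebuilt actions isn’t a big loss.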

MacOS Big Sur

As of version 4.0.17, the mozregression GUI now works on MacOS Big Sur. It is safe to ask community members to install and use it on this platform (though note the caveats due to the bundle being unsigned).

Usage Dashboard

Fulfilling a promise I implied last year, I created a public dataset for mozregression and built a dashboard tracking mozregression use using Observable. There are a few interesting insights and trends that can be gleaned from our telemetry. I’d be curious if the community can find any more!


Blog moving back to wrla.ch

Mar 21st, 2021

Mozilla meta

Housekeeping news: I’m moving this blog back to the wrla.ch domain from wlach.github.io. This domain sorta kinda worked before (I set up a netlify deploy a couple years ago), but the software used to generate this blog referenced GitHub all over the place in its output, so it didn’t really work as you’d expect. Anyway, this will be the last entry published on wlach.github.io: my plan is to turn that domain into a set of redirects in the future.

I don’t know how many of you are out there who still use RSS, but if you do, please update your feeds. I have filed a bug to update my Planet Mozilla entry, so hopefully the change there will be seamless.

Why? Recent events have made me not want to tie my public web presence to a particular company (especially a larger one, like Microsoft). I don’t have any immediate plans to move this blog off of GitHub, but this gives me that option in the future. For those wondering, the original rationale for moving to GitHub is in this post. Looking back, the idea of moving away from a VPS and WordPress made sense, the move away from my own domain less so. I think it may have been harder to set up static hosting (esp. with HTTPS) at that time… or I might have just been ignorant.

In related news, I decided to reactivate my twitter account: you can once again find me there as @wrlach (my old username got taken in my absence). I’m not totally thrilled about this (I basically stand by what I wrote a few years ago, except maybe the concession I made to Facebook being “ok”), but Twitter seems to be where my industry peers are. As someone who doesn’t have a large organic following, I’ve come to really value forums where I can share my work. That said, I’m going to be very selective about what I engage with on that site: I appreciate your understanding.


Community @ Mozilla: People First, Open Source Second

Feb 24th, 2021

Mozilla

It’s coming up on ten years at Mozilla for me, by far the longest I’ve held any job personally and exceedingly long by the standards of the technology industry. When I joined up in Summer 2011 to work on Engineering Productivity1, I really did see it as a dream job: I’d be paid to work full time on free software, which (along with open data and governance) I genuinely saw as one of the best hopes for the future. I was somewhat less sure about Mozilla’s “mission”: the notion of protecting the “open web” felt nebulous and ill-defined and the Mozilla Manifesto seemed vague when it departed from the practical aspects of shipping a useful open source product to users.

It seems ridiculously naive in retrospect, but I can remember thinking at the time that the right amount of “open source” would solve all the problems. What can I say? It was the era of the Arab Spring, WikiLeaks had not yet become a scandal, Google still felt like something of a benevolent upstart, even Facebook’s mission of “making the world more connected” sounded great to me at the time. If we could just push more things out in the open, then the right solutions would become apparent and fixing the structural problems society was facing would become easy!

What a difference a decade makes. The events of the last few years have demonstrated (conclusively, in my view) that open systems aren’t necessarily a protector against abuse by governments, technology monopolies and ill-intentioned groups of individuals alike. Amazon, Google and Facebook are (still) some of the top contributors to key pieces of open source infrastructure but it’s now beyond any doubt that they’re also responsible for amplifying a very large share of the problems global society is experiencing.

At the same time, some of the darker sides of open source software development have become harder and harder to ignore. In particular:

All this has led to a personal crisis of faith. Do openness and transparency inherently lead to bad outcomes? Should I continue to advocate for it in my position? As I mentioned above, the opportunity to work in the open with the community is the main thing that brought me to Mozilla— if I can’t find a way of incorporating this viewpoint into my work, what am I even doing here?

Trying to answer these questions, I went back to the manifesto that I just skimmed over in my early days. Besides openness — what are Mozilla’s values, really, and do I identify with them? Immediately I was struck by how much it felt like it was written explicitly for the present moment (even aside from the addendums which were added in 2018). Many points seem to confront problems we’re grappling with now which I was only beginning to perceive ten years ago.

Beyond that, there was also something that resonated with me on a deeper level. There were a few points, highlighted in bold, that really stood out:

  1. The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
  2. The internet is a global public resource that must remain open and accessible.
  3. The internet must enrich the lives of individual human beings.
  4. Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
  5. Individuals must have the ability to shape the internet and their own experiences on the internet.
  6. The effectiveness of the internet as a public resource depends upon interoperability (protocols, data formats, content), innovation and decentralized participation worldwide.
  7. Free and open source software promotes the development of the internet as a public resource.
  8. Transparent community-based processes promote participation, accountability and trust.
  9. Commercial involvement in the development of the internet brings many benefits; a balance between commercial profit and public benefit is critical.
  10. Magnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.

I think it’s worth digging beneath the surface of these points: what is the underlying value system behind them? I’d argue it’s this, simply put: human beings really do matter. They’re not just line items in a spreadsheet or some other resource to be optimized. They are an end in and of themselves. People (more so than a software development methodology) are the reason why I show up every day to do the work that I do. This is really an absolute which has enduring weight: it’s a foundational truth of every major world religion, to say nothing of modern social democracy.

What does working and building in the open mean, then? As we’ve seen above, it certainly isn’t something I’d consider “good” all by itself. Instead, I’d suggest it’s a strategy which (if we’re going to follow it) should come out of that underlying recognition of the worth of Mozilla’s employees, community members, and users. Every single one of these people matters, deeply. I’d argue, then, that Mozilla should consider the following factors in terms of how we work in the open:

None of the above is easy to address. But the alternatives are either close everything down to public participation (which I’d argue will lead to the death of Mozilla as an organization: it just doesn’t have the resources to compete in the marketplace without the backing of the community) or continue down the present path (which I don’t think is sustainable either). The last ten years have shown that the “open source on auto-pilot” approach just doesn’t work.

I suspect these problems aren’t specific to Mozilla and affect other communities that work in the open. I’d be interested in hearing other perspectives on this family of problems: if you have anything to add, my contact information is below.

  1. I’ve since moved to the Data team

  2. This includes our internal communications channels like our Matrix instance as well as issue trackers like Bugzilla. There’s also a question of what to do about non-Mozilla channels, like Twitter or the Orange Site. Although not Mozilla spaces, these places are often vectors for harassment of community members. I don’t have any good answers for what to do about this, aside from offering my solidarity and support to those suffering abuse on these channels. Disagreement with Mozilla’s strategy or policy is one thing, but personal attacks, harassment, and character assassination are never ok— no matter where it’s happening. 


The Glean Dictionary

Jan 27th, 2021

Mozilla Glean

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

On behalf of Mozilla’s Data group, I’m happy to announce the availability of the first milestone of the Glean Dictionary, a project to provide a comprehensive “data dictionary” of the data Mozilla collects inside its products and how it makes use of it. You can access it via this development URL:

https://dictionary.protosaur.dev/

The goal of this first milestone was to provide an equivalent to the popular “probe” dictionary for newer applications which use the Glean SDK, such as Firefox for Android. As Firefox on Glean (FoG) comes together, this will also serve as an index of what data is available for Firefox and how to access it.

Part of the vision of this project is to act as a showcase for Mozilla’s practices around lean data and data governance: you’ll note that every metric and ping in the Glean Dictionary has a data review associated with it — giving the general public a window into what we’re collecting and why.

In addition to displaying a browsable inventory of the low-level metrics which these applications collect, the Glean Dictionary also provides:

Over the next few months, we’ll be expanding the Glean Dictionary to include derived datasets and dashboards / reports built using this data, as well as allow users to add their own annotations on metric behaviour via a GitHub-based documentation system. For more information, see the project proposal.

The Glean Dictionary is the result of the efforts of many contributors, both inside and outside Mozilla Data. Special shout-out to Linh Nguyen, who has been moving mountains inside the codebase as part of an Outreachy internship with us. We welcome your feedback and involvement! For more information, see our project repository and Matrix channel (#glean-dictionary on chat.mozilla.org).


iodide retrospective

Nov 10th, 2020

Mozilla iodide

A bit belatedly, I thought I’d write a short retrospective on Iodide: an effort to create a compelling, client-centred scientific notebook environment.

Despite not writing a ton about it (the sole exception being this brief essay about my conservative choice for the server backend) Iodide took up a very large chunk of my mental energy from late 2018 through 2019. It was also essentially my only attempt at working on something that was on its way to being an actual product while at Mozilla: while I’ve led other projects that have been interesting and/or impactful in my nine-odd years here (mozregression and perfherder being the biggest successes), they fall firmly into the “internal tools supporting another product” category.

At this point it’s probably safe to say that the project has wound down: no one is being paid to work on Iodide and it’s essentially in extreme maintenance mode. Before it’s put to bed altogether, I’d like to write a few notes about its approach, where I think it had big advantages, and where it seems to fall short. I’ll conclude with some areas I’d like to explore (or would like to see others explore!). I’d like to emphasize that this is my opinion only: the rest of the Iodide core team no doubt have their own thoughts. That said, let’s jump in.

What is Iodide, anyway?

One thing that’s interesting about a product like Iodide is that people tend to project their hopes and dreams onto it: if you asked any given member of the Iodide team to give a 200 word description of the project, you’d probably get a slightly different answer emphasizing different things. That said, a fair initial approximation of the project vision would be “a scientific notebook like Jupyter, but running in the browser”.

What does this mean? First, let’s talk about what Jupyter does: at heart, it’s basically a Python “kernel” (read: interpreter), fronted by a webserver. You send snippets of code to the interpreter via a web interface and they are faithfully run on the backend (either on your local machine in a separate process or on a server in the cloud). Results are then returned to the web interface and rendered by the browser in one form or another.

Iodide’s model is quite similar, with one difference: instead of running the kernel in another process or somewhere in the cloud, all the heavy lifting happens in the browser itself, using the local JavaScript and/or WebAssembly runtime. The very first version of the product that Hamilton Ulmer and Brendan Colloran came up with was definitely in this category: it had no server-side component whatsoever.

Truth be told, even applications like Jupyter do a fair amount of computation on the client side to render a user interface and the results of computations: the distinction is not as clear cut as you might think. But in general I think the premise holds: if you load an iodide notebook today (still possible on alpha.iodide.io! no account required), the only thing that comes from the server is a little bit of static JavaScript and a flat file delivered as a JSON payload. All the “magic” of whatever computation you might come up with happens on the client.

Let’s take a quick look at the default tryit iodide notebook to give an idea of what I mean:

%% fetch
// load a js library and a csv file with a fetch cell
js: https://cdnjs.cloudflare.com/ajax/libs/d3/4.10.2/d3.js
text: csvDataString = https://data.sfgov.org/api/views/5cei-gny5/rows.csv?accessType=DOWNLOAD

%% js
// parse the data using the d3 library (and show the value in the console)
parsedData = d3.csvParse(csvDataString)

%% js
// replace the text in the report element "htmlInMd"
document.getElementById('htmlInMd').innerText = parsedData[0].Address

%% py
# use python to select some of the data
from js import parsedData
[x['Address'] for x in parsedData if x['Lead Remediation'] == 'true']

The %% delimiters indicate individual cells. The fetch cell is an iodide-native cell with its own logic to load the specified resources into JavaScript variables when evaluated. js and py cells direct the browser to interpret the code inside of them, which causes the DOM to mutate. From these building blocks (and a few others), you can build up an interactive report which can also be freely forked and edited by anyone who cares to.

In some ways, I think Iodide has more in common with services like Glitch or Codepen than with Jupyter. In effect, it’s mostly offering a way to build up a static web page (doing whatever) using web technologies— even if the user interface affordances and cell-based computation model might remind you more of a scientific notebook.

What works well about this approach

There are a few nifty things that come from doing things this way:

I continue to be surprised and impressed with what people come up with in the Jupyter ecosystem so I don’t want to claim these are things that “only Iodide” (or other tools like it) can do— what I will say is that I haven’t seen many things that combine both the conciseness and expressiveness of iomd. The beauty of the web is that there is such an abundance of tutorials and resources to support creating interactive content: when building iodide notebooks, I would freely borrow from resources such as MDN and Stackoverflow, instead of being locked into what the authoring software thinks one should be able to express.

What’s awkward

Every silver lining has a cloud and (unfortunately) Iodide has a pretty big one. Depending on how you use Iodide, you will almost certainly run into the following problems:

The first, I (and most people?) can usually live with. Computers are quite powerful these days and most data tasks I’m interested in are easily within the range of capabilities of my laptop. In the cases where I need to do something larger scale, I’m quite happy to fire up a Jupyter Notebook, BigQuery Console, or <other tool of choice> to do the heavy-lifting, and then move back to a client-side approach to visualize and explore my results.

The second is much harder to deal with, since it disrupts the process of exploration that is so key to scientific computing. I’m quite happy to wait 5 seconds, 15 seconds, even a minute for a longer-running computation to complete, but if I see the slow script dialog and everything about my environment grinds to a halt, it not only blocks my work but causes me to lose faith in the system. How do I know if it’s even going to finish?

This occurs way more often than you might think: even a simple notebook loading pandas (via pyodide) can cause Firefox to pop up the slow script dialog:

Why is this so? Aren’t browsers complex beasts which can do a million things in parallel these days? Only sort of: while the graphical and network portions of loading and using a web site are highly parallel, JavaScript and any changes to the DOM can only occur synchronously by default. Let’s break down a simple iodide example which trivially hangs the browser:

%% js

while (1) { true }

link (but you really don’t want to)

What’s going on here? When you execute that cell, the web site’s sole thread is now devoted to running that infinite loop. No longer can any other JavaScript-based event handler be run, so for example the text editor (which uses Monaco under the hood) and menus are now completely unresponsive.

The iodide team was aware of this issue from the time I joined the project. There was no end of discussions about how to work around it, but they never really came to a satisfying conclusion. The most obvious solution is to move the cell’s computation to a web worker, but workers don’t have synchronous access to the DOM, which is required for web content to work as you’d expect. While there are projects like ComLink that attempt to bridge this divide, they require both a fair amount of skill and code to use effectively. As mentioned above, one of the key differentiators between iodide and other notebook environments is that tools like jQuery, d3, etc. “just work”. That is, you can take a code snippet off the web and run it inside an iodide notebook: there’s no workaround I’ve been able to find which maintains that behaviour and ensures that the Iodide notebook environment is always responsive.

It took a while for this to really hit home, but having some separation from the project, I’ve come to realize that the problem is that the underlying technology isn’t designed for the task we were asking of it, nor is it likely to ever be in the near future. While the web (by which I mean not only the browser, but the ecosystem of popular tooling and libraries that has been built on top of it) has certainly grown in scope to things it was never envisioned to handle like games and office applications, it’s just not optimized for what I’d call “editable content”. That is, the situation where a web page offers affordances for manipulating its own representation.

While modern web browsers have evolved to (sort of) protect one errant site from bringing the whole thing down, they certainly haven’t evolved to protect a particular site against itself. And why would they? Web developers usually work in the terminal and text editor: the assumption is that if their test code misbehaves, they’ll just kill their tab and start all over again. Application state is persisted either on-disk or inside an external database, so nothing will really be lost.

Could this ever change? Possibly, but I think it would be a radical rethinking of what the web is, and I’m not really sure what would motivate it.

A way forward

While working on iodide, we were fond of looking at this diagram, which was taken from this study of the data science workflow:

It describes how people typically perform computational inquiry: you would poke around with some raw data and run some transformations on it. Only after that process was complete would you start trying to build up a “presentation” of your results to your audience.

Looking back, it’s clear that Iodide’s strong suit was the explanation part of this workflow, rather than collaboration and exploration. My strong suspicion is that we actually want to use different tools for each one of these tasks. Coincidentally, this also maps to the bulk of my experience working with data at Mozilla, using iodide or not: my most successful front-end data visualization projects were typically the distilled result of a very large number of ad hoc explorations (python scripts, SQL queries, Jupyter notebooks, …). The actual visualization itself contains very little computational meat: basically “just enough” to let the user engage with the data fruitfully.

Unfortunately much of my work in this area uses semi-sensitive internal Mozilla data so can’t be publicly shared, but here’s one example:

link

I built this dashboard in a single day to track our progress (resolved/open bugs) when we were migrating our data platform from AWS to GCP. It was useful: it let us quickly visualize which areas needed more attention and gave upper management more confidence that we were moving closer to our goal. However, there was little computation going on in the client: most of the backend work was happening in our Bugzilla instance, with the iodide notebook just doing a little bit of post-processing to visualize the results in a useful way.

Along these lines, I still think there is a place in the world for an interactive visualization environment built on principles similar to iodide: basically a markdown editor augmented with primitives oriented around data exploration (for example, charts and data tables), with allowances to bring in any JavaScript library you might want. However, any in-depth data processing would be assumed to have mostly been run elsewhere, in advance. The editing environment could either be the browser or a code editor running on your local machine, per user preference: in either case, there would not be any really important computational state running in the browser, so things like dynamic reloading (or just closing an errant tab) should not cause the user to lose any work.

This would give you the best of both worlds: you could easily create compelling visualizations that are easy to share and remix at minimal cost (because all the work happens in the browser), but you could also use whatever existing tools work best for your initial data exploration (whether that be JavaScript or the more traditional Python data science stack). And because the new tool has a reduced scope, I think building such an environment would be a much more tractable project for an individual or small group to pursue.

More on this in the future, I hope.

Many thanks to Teon Brooks and Devin Bayly for reviewing an early draft of this post