Spent a good chunk of a week off (re)learning the basics of neural networks: forward propagation, gradient descent, loss functions. It’s taking some time, but gradually I feel some level of understanding is taking hold. What continues to amaze me is how simple it all is: you really only need a high-school level understanding of linear algebra and calculus to understand most of what’s going on behind the scenes. As best I can tell, the innovations of the last few years (in particular the transformer models behind things like ChatGPT) are refinements on top of these basic concepts.
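To make that concrete, here’s a minimal sketch of the whole loop in plain Python (the toy data, starting weights, and learning rate are all made up for illustration): a single neuron with no nonlinearity, a squared-error loss, and gradient descent derived with nothing beyond the chain rule.

# One "neuron" (y_hat = w*x + b) trained by gradient descent on a toy task.
# The derivatives below are just the chain rule applied to the
# squared-error loss (y_hat - y)**2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # learn y = 2x
w, b = 0.5, 0.0                              # arbitrary starting guesses
learning_rate = 0.01

for step in range(1000):
    dw, db = 0.0, 0.0
    for x, y in data:
        y_hat = w * x + b    # forward propagation
        error = y_hat - y
        dw += 2 * error * x  # d(loss)/dw, accumulated over the data
        db += 2 * error      # d(loss)/db
    w -= learning_rate * dw  # the gradient descent step
    b -= learning_rate * db

print(w, b)  # approaches 2.0 and 0.0

Real networks stack many of these units with nonlinearities in between, but the training loop is recognizably the same shape.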
Neural networks (there is really nothing “neural” about them) are not new: I remember hearing about them as an undergraduate in the early 2000s (and I think they were rather old hat even then). At the time, they were pretty much dismissed as a warmed-over model of behaviorism, unlikely to be useful anywhere except perhaps in a few simplistic applications. Based on what I saw at the time, I agreed and basically bought into the idea that computers are mainly useful as an adjunct to human processes, systems and intuition.
Thus, I find the fact that these systems can produce something even mildly resembling novel or creative output (as is the case with things like ChatGPT and Midjourney) surprising, as in it wasn’t something I saw coming. Yes, much of what has been built using these technologies is overhyped and arguably dangerous. Still, I also don’t want to lose the sense of wonder that this is possible at all. If I was mistaken about this, what else might I be missing?
I feel like the best response at this point is to take a step back, learn as much as I can, and then develop an opinion. I expect this process to take at least a year, probably longer.
In case it’s helpful to others, here’s some literature I’ve been working through on these topics.
Some theoretical but approachable material for understanding the basics:
Hacker’s Guide to Neural Networks: The author now disclaims it as out of date, but I still found its explanation of neural networks as a circuit quite helpful for building my intuition.
On the Dangers of Stochastic Parrots: Good antidote to some of the hype around LLMs, incorporating an understanding of how these systems may further perpetuate social harms. Much more nuanced and interesting than most of the critiques I’ve seen fly by in the last few months.
Resisting Deterministic Thinking: How can we think critically about AI without falling into the trap of black/white thinking (“this is all good” vs. “this is all bad”)?
And some articles on how to think about LLMs from a pragmatic perspective as a programmer:
Cheating is all you need: Some speculative thinking about how this stuff might play out for those of us doing programming from the internet-famous Steve Yegge.
Been struggling with depression over the past couple of weeks. Some of this is seasonal (with the shortening of the days), though I wouldn’t say it always happens. Last year at this time I recall feeling the opposite of depressed: that probably had to do with the fact that I knew I was leaving my previous job at Mozilla and wanted to get as much done as possible. Sometimes a highly motivating life situation can keep it in abeyance. Nonetheless, it’s here now, again, and demands to be dealt with.
What does depression even mean? It’s a bit of a hard thing to pin down, exactly. But from the perspective of looking after my own well being, I don’t think I really need a definition. What matters are the symptoms, which I’d roughly express as:
Knowing what I should be doing, but not being able to do it
An increase in time spent scanning social media and news sites
Feelings of low self worth
Feelings that nothing really matters
Lack of creativity, improvisation
An increase in self-referential thinking (this post would be an example of that)
A few things I tend to try to diminish its effects:
Go outside, expose myself to sunlight. Explore nature.
Exercise
Meditate
Eat fewer carbohydrates, more vegetables and protein
Talk to friends
Do nice things for other people
I don’t feel like doing these things but I try with all my might, against all my will, to do them anyway. Any individual action might not do much: but the cumulative effect of doing all of the above seems to have an impact — or at least that’s what I tell myself.
And yet despite my best efforts, it’s not always enough. I do all of the things in the second list, and yet still find myself suffering in all the ways described by the first. What do I do then?
I try to understand that there really isn’t an escape from unpleasant feeling, and that it’s just part of life: glorious and beautiful in its complexity. I try to be curious about what’s going on, even if I think it’s all happened before. If that’s not possible, I at least try to be present with it. That’s all I can do.
Just wanted to type up a couple of notes about working with Sphinx (the python documentation generator) inside a monorepo, an issue I’ve been struggling with (off and on) at Voltus since I started. I haven’t seen much written about this topic despite (I suspect) it being a reasonably frequent problem.
In general, there’s a lot to like about Sphinx: it’s great at handling deeply nested trees of detailed documentation with cross-references inside a version control system. It has local search that works pretty well and some themes (like readthedocs) scale pretty nicely to hundreds of documents. The directives and roles system is pretty flexible and covers most of the common things one might want to express in technical documentation. And if the built-in set of functionality isn’t enough, there’s a wealth of third party extension modules. My only major complaint is that it uses the somewhat obscure reStructuredText file format by default, but you can get around that by using the excellent MyST extension.
Unfortunately, it has a pretty deeply baked-in assumption that all documentation for your project lives inside a single subfolder. This is fine for a small repository representing a single Python module, with a layout something like this (a sketch; the names are illustrative):
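my-module/
├── my_module/          # the module’s source code
├── docs/               # all documentation lives here
│   ├── conf.py
│   └── index.md
└── setup.py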
In a monorepo, you usually want to include a module’s documentation inside its own directory. This allows you to use your code ownership constraints for documentation, among other things. Sketched out (again with illustrative names), the layout looks more like this:
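monorepo/
├── docs/               # top-level documentation and Sphinx configuration
│   ├── conf.py
│   └── index.md
├── submodule-a/
│   ├── src/
│   └── docs/           # owned by submodule-a’s team
│       └── index.md
└── submodule-b/
    ├── src/
    └── docs/
        └── index.md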
The naive solution would be to create a Sphinx site for every single one of these submodules. This is what happened at Voltus and I don’t recommend it. For a large monorepo you’ll end up with dozens, maybe hundreds of documentation “sites”. Under this scenario, discoverability becomes a huge problem: no longer can you rely on tables of contents and the built-in search to discover content: you just have to “know” where things live. I’m more than nine months in here and I’m still discovering new documentation.
It would be much better if we could somehow collect documentation from other parts of the repository into a single site. Is this possible? tl;dr: Yes. There are a few solutions, each with their pros and cons.
The obvious solution that doesn’t work
The most obvious solution here is to create a symbolic link inside your documentation directory, say something like the following (paths are illustrative):
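ln -s ../submodule-a/docs docs/submodule-a   # illustrative paths, run from the repository root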
Unfortunately, this doesn’t work. ☹️ Sphinx doesn’t follow symbolic links.
Solution 1: Just copy the files in
The simplest workable solution is to just copy the files from various parts of the monorepo into place, as part of the build system. Mozilla did this for Firefox, with the moztreedocs system.
The results look pretty good, but this is a bespoke solution. Aside from general ideas, there’s no way I’m going to be able to apply anything in moztreedocs to Voltus’s monorepo (which is based on a completely different build system). And being honest, I’m not sure if the 40+ hour (estimated) effort to reimplement it would be a good use of time compared to other things I could be doing.
Solution 2: Use the include directive

If you’re using MyST, another option is its include directive, which lets one document embed the contents of a file from elsewhere in the repository. This is a limited form of embedding: it won’t let you import an entire directory of markdown files. But if your submodules mostly just include content in the form of a README.md (or similar), it might just be enough. Just create a directory for these stub files to live in (say services) and slot them in:
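As a sketch (the stub name and relative path are made up and depend on your layout), a stub file like docs/services/submodule-a.md would contain nothing but:

```{include} ../../submodule-a/README.md
```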
I’m currently in the process of implementing this solution inside Voltus. I’m optimistic that this will be a big (if incremental) step up over what we have right now. There are obviously limits, but you can cram a lot of useful information into a README. As a bonus, it’s a pretty nice marker for those spelunking through the source code (much more so than a forest of tiny documentation files).
Solution 3: Sphinx Collections
This one I just found out about today: Sphinx Collections is a small Python module that lets you automatically import entire directories of files into your Sphinx tree, under a _collections directory. You configure it in your top-level conf.py with something like this (a sketch; the collection name and source path are illustrative):
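# conf.py (sketch): assumes the sphinx-collections package is installed
extensions = [
    "myst_parser",
    "sphinxcontrib.collections",
]

# Each collection copies content from elsewhere in the repository into
# _collections/<name> at build time; "copy_folder" is one of several drivers.
collections = {
    "submodule-a": {
        "driver": "copy_folder",
        "source": "../submodule-a/docs/",
    },
}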
At this point, submodule-a’s documentation should be available under http://<my doc domain>/_collections/submodule-a/index.html
Pretty nifty. The main downside I’ve found so far is that this doesn’t play nicely with the Edit on GitHub links that the readthedocs theme automatically inserts (it thinks the files exist under _collections), but there’s probably a way to work around that.
I plan on investigating this approach further in the coming months.
The 90-day mark just passed at my new gig at Voltus, which feels like a good time for a bit of self-reflection.
In general, I think it’s been a good change and that it was the right time to leave Mozilla. Since I left, a few people have asked me why I chose to do so: while the full answer is pretty complicated (these things are never simple!), I think it ultimately comes down to wanting to try something new after 10+ years. I’ve accumulated a fair amount of expertise in web development and data engineering and I wanted to see if I could apply it to a new area that I cared about— in this case, climate change and the energy transition.
Voltus is a much younger and different company than Mozilla was, and there’s no shortage of things to learn and do. Energy markets are a rather interesting technical domain to work in— a big intersection between politics, technology, and business. Lots of very old and very new things all at once. As a still-relatively young company, there is definitely more of a feeling that it’s possible to shape Voltus’s culture and practices, which has been interesting. There’s a bit of a balancing act between sharing what you’ve learned in previous roles while having the humility to recognize that there’s much you still don’t understand in a new workplace.
On the downside, I have to admit that I do miss being able to work in the open. Voltus is currently in the process of going public, which has made me extra shy about saying much of anything about what I’ve been working on in a public forum.
To some extent I’ve been scratching this itch by continuing to work on Irydium when I have the chance. I’ve done up a few new releases in the last couple of months, which I think have been fairly well received inside my very small community of people doing like-minded things. I’m planning on attending (at least part of) a pyodide sprint in early May, which I think should be a lot of fun as well as an opportunity to push browser-based data science forward.
I’ve also kept more of a connection with Mozilla than I thought I would have: some video meetings with former colleagues, answering questions on Element (chat.mozilla.org), even some pull requests where I felt like I could make a quick contribution. I’m still using Firefox, which has actually given me more perspective on some problems that people at Mozilla might not experience (e.g. this screensharing bug which you’d only see if you’re using a WebRTC-based video conferencing solution like Google Meet).
That said, I’m not sure to what extent this will continue: even if the source code to Firefox and the tooling that supports it is technically “open source”, outsiders like myself really have very limited visibility into what Mozilla is doing these days. This makes it difficult to really connect with much of what’s going on or know how I might be able to contribute. While it might be theoretically possible to join Mozilla’s Slack (at least last I checked), that feels like a rabbit hole I’d prefer not to go down. While I’m still interested in supporting Mozilla’s mission, I really don’t want more than one workplace chat tool in my life: there’s a lot of content there that is no longer relevant to me as a non-employee and (being honest) I’d rather leave behind. There’s lots more I could say about this, but probably best to leave it there: I understand that there are reasons why things are the way they are, even if they make me a little sad.
I’ve decided to leave Mozilla as an employee: my last day will be December 31st, 2021.
It’s hard to overstate the impact Mozilla has had on my life. In particular, I’m grateful for all the interactions I’ve had with the community: the opportunity to build technology for the public good with people around the world was unique and I’m really going to miss it.
Looking back over the past 10 years, I’m feeling pretty good about the impact I had through building better developer and data tooling: mozregression, Perfherder, Iodide and the Glean Dictionary stand out as particular highlights. Thanks to everyone who worked on those things with me! I am because we are.
Last Lecture
It’s become traditional in Data @ Mozilla for the person leaving to give a last lecture on their way out. I decided to give a talk on a specific area of focus for me over the last couple of years: documentation.
I’m not sure how comprehensible it will be to people outside of my particular context at Mozilla, but it seemed fitting to post it publicly regardless.
I’m more convinced than ever that documentation is one of the keys to empowering people to make better decisions with data (no matter what their job title). I hope my efforts here have been helpful.
Joining the Community
After having spent a good chunk of energy on making it possible for people outside the Mozilla Corporation to contribute to our projects, I’m looking forward to seeing what it’s like on the other side of the fence.
I’m not sure right now exactly how active I’ll be, but I plan on sticking around on Matrix and Bugzilla, at least a little bit. If there’s anything I can help you with, feel free to reach out!
Last summer, I took a 6-week sabbatical from my job to attend a virtual “programmers retreat” at the Recurse Center. I thought I’d write up some notes on the experience, with a particular lens towards what makes an environment suited towards learning, innovation, and personal growth.
Some context: I’m currently working as a software engineer at Mozilla, building out our data pipeline and analysis tooling. I’ve been at my current position for more than 10 years (my “anniversary” actually passed while I was out). I started out as a senior engineer in 2011, and was promoted to staff engineer in 2016. In tech-land, this is a really long tenure at a company. I felt like it was time to take a break from my day-to-day, explore some new ideas and concepts, and hopefully expose myself to a broader group of people in my field.
My original thinking was that I would mostly be spending this time building out an interactive computation environment I’ve been working on called Irydium. And I did quite a bit of that. However, I think the main thing I took away from this experience was some insight on what makes a remote environment for knowledge work really “click”. In particular, what makes somewhere feel psychologically safe, and how this feeling allows us to innovate and do our best work.
While the Recurse Center obviously has different goals than an organization that builds and delivers consumer software, I do think there are some things that it does that could be applied to Mozilla (and, likely, many other tech workplaces).
What is the Recurse Center?
Most succinctly, the Recurse Center is a “writer’s retreat for programmers”. It tries to provide an environment conducive to learning and creativity, an opportunity to refine your craft and learn new things, both from the act of programming itself and from interactions with the other like-minded people attending. The Recurse Center admits a wide variety of people, from those who have only been through a coding bootcamp to those who have been in the industry many years, like myself. The main admission criteria, from what I gather, are curiosity and friendliness.
Once admitted, you do a “batch”— either a mini batch (1 week), a half-batch (6 weeks), or a full batch (12 weeks). I did a half-batch.
How does it work (during a global pandemic)?
The Recurse experience used to be entirely in-person, in a space in New York City: if you wanted to go, you needed to move there at least temporarily. Obviously that’s out the window during a global pandemic, and all activities are currently happening online. This was actually pretty ideal for me at this point in my life, as it allowed me to participate entirely remotely from my home in Hamilton, Ontario, Canada (near Toronto).
There’s a few elements that make “Virtual RC” tick:
A virtual space where you can see other people in your cohort. This is particularly useful when you want to jump into a conference room.
A shared “calendar” where people can schedule events, either ad hoc (e.g. a one-off social event or a paper discussion) or on a regular basis (e.g. a reading group)
A Zulip chat server (which is a bit like Slack) for ad hoc conversation with people in your cohort and alumni. There are multiple channels, covering a broad spectrum of interests.
Why does it work?
So far, what I’ve described probably sounds a lot like any remote tech workplace during the pandemic… and it sort of is! In some ways, my schedule and life while at Recurse didn’t feel all that different from my normal day-to-day. Wake up in the morning, drink coffee, meditate, work for roughly 8 hours, done. Qualitatively, however, my experience at Recurse felt unusually productive, and I learned a lot more than I expected to: not just the core stuff related to Irydium, but also unexpected new concepts like CRDTs, product design, and even how Visual Studio Code syntax highlighting works.
What made the difference? Certainly, not having the normal pressures of a workplace helps - but I think there’s more to it than that. The way RC is constructed reinforces a sense of psychological safety which I think is key to learning and growth.
What is psychological safety and why should I care?
Psychological safety is a bit of a hot topic these days and there’s a lot of discussion about it in management circles. I think it comes down to a feeling that you can take risks and “put yourself out there” without fear that you’ll be ignored, attacked, or ridiculed.
Why is this important? I would argue, because knowledge work is about building understanding — going from a place of not understanding to understanding. If you’re working on anything at all innovative, there is always an element of the unknown. In my experience, there is virtually always a sense of discomfort and uncertainty that goes along with that. This goes double when you’re working around and with people that you don’t know terribly well (and who might have far more experience than you). Are they going to make fun of you for not knowing a basic concept or for expressing an idea that’s “so wrong I don’t even know where to begin”? Or, just as bad, will you not get any feedback on your work at all?
In reality, except in truly toxic environments, you’ll rarely encounter outright abusive behaviour. But the isolation of remote work can breed similar feelings of disquiet and discomfort over time. My sense, after a year of working “hardcore” remote in COVID times, is that our normal workplace rituals of meetings, “stand ups”, and discussions over Slack don’t provide enough space for a meaningful sense of psychological safety to develop. They’re good enough for measuring progress towards agreed-upon goals but a true sense of belonging depends on less tightly scripted interactions among peers.
How the Recurse environment creates psychological safety
But the environment I described above isn’t that different from a workplace, is it? Speaking from my own experience, my coworkers at Mozilla are all pretty nice people. There’s also many channels for informal discussion at Mozilla, and of course direct messaging is always available (via Slack or Matrix). And yet, I still feel there is a pretty large gap between the two experiences. So what makes the difference? I’d say there were three important aspects of Recurse that really helped here: social rules, gentle prompts, and a closed space.
Social rules
There’s been a lot of discussion about community participation guidelines and standards of behaviour in workplaces. In general, these types of policies target really egregious behaviour like harassment: this is a pretty low bar. They aren’t, in my experience, sufficient to create an environment that actually feels safe.
The Recurse Center goes over and above a basic code of conduct, with four simple social rules:
No well-actually’s: corrections that aren’t relevant to the point someone was trying to make (this is probably the rule we’re most heavily conditioned to break).
No feigned surprise: acting surprised when someone doesn’t know something.
No backseat driving: lobbing advice from across the room (or across the online chat) without really joining or engaging in a conversation.
No subtle -isms: subtle expressions of racism, sexism, ageism, homophobia, transphobia and other kinds of bias and prejudice.
These rules aren’t “commandments” and you’re not meant to feel shame for violating them. The important thing is that by being there, the rules create an environment conducive to learning and growth. You can be reasonably confident that you can bring up a question or discussion point (or respond to one) and it won’t lead to a bad outcome. For example, you can expect not to be made fun of for asking what a UNIX socket is (and if you are, you can tell the person doing so to stop). Rather than there being an unspoken rule that everyone should already know everything about what they are trying to do, there is a spoken rule that states it’s expected that they don’t.
Working on Irydium, there’s an infinite number of ways I can feel incompetent: that comes with the territory when engaging with concepts that I still don’t feel completely comfortable with: parsers, compilers, WebAssembly… the list goes on. Knowing that I could talk about what I’m working on (or something I’m interested in) and that the responses I got would be constructive and directed at the project, not the person, made all the difference.1
Gentle prompts
The thing I loved the most about Recurse were the gentle prompts to engage with other people, talk about your work, and get help. A few that I really enjoyed during my time there:
The “checkins” channel. People would post what’s going on with their time at RC, their challenges, their struggles. Often there would be little snippets about people’s lives in there, which built a feeling of community.
Hack & Tell: A weekly event where a group of us would get together in a Zoom room, talk about working on or building something, then rejoin the chat an hour later to show off what we accomplished.
Coffee Chats: A “coffee chat” bot at RC would pair you with other people in your batch (or alumni) on a cadence of your choosing. I met so many great people this way!
Weekly Presentations: At the end of each week, people would sign up to share something that they were working on or learned.
… and I could go on. What’s important are not the specific activities, but their end effect of building connectedness, creating opportunities for serendipitous collaboration and interaction (more than one discussion group came out of someone’s checkin post on Zulip) and generally creating an environment well-suited to learning.
A (semi) closed space
One of the things that makes the gentle prompts above “work” is that you have some idea of who you’re going to be interacting with. Having some predictability about who’s going to see what you post and engage with you (that they were vetted by RC’s interview process and are committed to the above-mentioned social rules) gives you some confidence to be vulnerable and share things that you might be reluctant to otherwise.
Those who have known me for a while will probably see the above as being a bit of a departure from what I normally preach: throughout my tenure at Mozilla, I’ve constantly pushed the people I’ve worked with to do more work in public. In the case of a product like Firefox, which touches so many people, I think open and transparent practices are absolutely essential to building trust, creating opportunity, and ensuring that our software reflects a diversity of views. I applied the same philosophy to Irydium’s development while I was at the Recurse Center: I set up a public Matrix channel to discuss the project, published all my work on GitHub, and was quite chatty about what I was working on, both in this blog and on Twitter.
The key, I think, is being deliberate about what approach you take when: there is a place for both public and private conversations about what we work on. I’m strongly in favour of open design documents, community calls, public bug trackers and open source in general. But I think it’s also pretty ok to have smaller spaces for learning, personal development, and question asking. I know I strongly appreciated having a smaller group of people that I could talk to about ideas that were not yet fully formed: you can always bring them out into the open later. The psychological risk of working in public can be mitigated by the psychological safety that can be developed within an intentional community.
Bringing it back
Returning to my job, I wondered if it might be possible to bring some of what I described above back to Mozilla? Obviously not everything would be directly transferable: Mozilla has its own mission and goals, and there are pressures that exist in a workplace that do not exist in an environment purely directed at learning. Still, I suspected that there was something we could do here. And that it would be worth doing, not just to improve the felt experience of the people here (though that would be reason enough) but also to get more feedback on our work and create more opportunities for collaboration and innovation.
I felt like trying to do something inside our particular organization (Data Engineering and Data Science) would be the most tractable initial step. I talked a bit about my experience with Will Kahn-Greene (who has been at Mozilla around the same length of time as I have) and we came up with what we called the “Data Neighbourhood” project: a set of grassroots micro-initiatives to increase our connectedness as a group. As an organization directed primarily at serving other parts of Mozilla, most of our team’s communication is directed outward. It’s often hard to know what everyone else is up to, where they’re struggling, and how we could help each other out. Attacking that problem directly seemed like the best place to start.
The first experiment we tried was a “data checkins” channel on Slack, a place for people to talk informally about their work (or life!). I explicitly set it up with a similar set of social rules as outlined above and tried to emphasize that it was a place to talk about how things are going, rather than a place to report status to your manager. After a somewhat slow start (the initial posts were from Will, myself, and a few other people from Data Engineering who had been around for a long time) we’re beginning to see engagement from others, including some newer people I hadn’t interacted with much before. There’s also been a few useful threads of conversations across different sub-teams (for example, a discussion on how we identify distinct versions of Firefox for iOS) that likely would not have happened without the channel.
Since then, others have tried a few other things in the same vein (an ad hoc coffee chat pairing bot, a “writing help” channel) and there are some signs of success. There’s clearly an appetite for new and better ways for us to relate to each other about the work we’re doing, and I’m excited to see how these ideas evolve over time.
I suspect there are limits to how psychologically safe a workplace can ever feel (and some of that is probably outside of any individual’s control). There are dynamics in a workplace which make applying some of Recurse’s practices difficult. In particular, a posture of “not knowing things is o.k.” may not apply perfectly to a workplace where people are hired (and promoted) based on perceived competence and expertise. Still, I think it’s worth investigating what might be possible within the constraints of the system we’re in. There are big potential benefits, for our creative output and our well-being.
Many thanks to Jenny Zhang, Kathleen Beckett, Joe Trellick, Taylor Phebillo, Vaibhav Sagar, and Will Kahn-Greene for reviewing earlier drafts of this post.
Getting back into the swing of things at Mozilla after my extended break. I’m currently working on enhancing and extending Looker support for Glean-based applications, which eventually led me back to working on bigquery-etl, our framework for creating derived datasets in our data lake.
I spent some time working on improving the initial developer experience of bigquery-etl early this year, so I figured it would be no problem to get going again despite an extended hiatus from it (I think it’s probably been ~2–3 months since I last touched it). Unfortunately the first thing I got after creating a fresh virtual environment (to pick up the new dependency updates) was this exciting looking error:
wlach@antwerp bigquery-etl % ./bqetl --help
Traceback (most recent call last):
...
File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/__init__.py", line 16, in <module>
from .encryption_config import EncryptionConfiguration
File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/encryption_config.py", line 26, in <module>
class EncryptionConfiguration(proto.Message):
File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/message.py", line 200, in __new__
file_info = _file_info._FileInfo.maybe_add_descriptor(filename, package)
File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/_file_info.py", line 42, in maybe_add_descriptor
descriptor=descriptor_pb2.FileDescriptorProto(
TypeError: descriptor to field 'google.protobuf.FileDescriptorProto.name' doesn't apply to 'FileDescriptorProto' object
What I did
Since we have pretty decent continuous integration at Mozilla, when I see an error like this I am usually pretty sure it’s some kind of strange interaction between my local development environment and whatever dependencies we’ve specified for the repository in question. Usually these problems are pretty easy to solve.
First thing I tried was to type the error into Google, to see if this had come up for anyone else before. I tried several variations of TypeError: descriptor to field and FileDescriptorProto and nothing really turned up. This strategy almost always turns up something; when it doesn’t, it usually indicates that something pretty strange is happening.
To see if this was a strange problem particular to us, I asked on our internal channel but no one had offhand seen or heard of this error either. One of my colleagues (who had a working setup on a Mac, the same environment I was using) suggested I set up pyenv to isolate my development environment, which was a good idea but did not seem to solve the problem: both Python 3.8 and 3.9 installed via pyenv ran into the exact same issue.
After flailing around trying a number of other failed approaches (maybe I need to upgrade the version of virtualenv that we’re using?), I broke down and looked harder at the error itself. It seemed to be some kind of typing error in Google’s protobuf library, which google-cloud-bigquery is calling. If this sort of thing was happening to everyone, we probably would have seen it happening more broadly. So my guess, again, was that it was happening due to an obscure interaction between some variable on my machine and this particular combination of dependencies.
At this point, I systematically went through our set of python dependencies to see what might be the matter. For the most part, I found nothing surprising or suspicious. google-api-core was at the latest version, as was google-cloud-bigquery. However, I did notice that the version of protobuf we were using was a little older (3.15.8 when the latest “official” version on pypi was 3.17.3).
It seemed like a longshot that the problem was there, but upgrading the dependency was worth a try just in case. So I bumped the version of protobuf to the latest version in my local checkout (pip install protobuf==3.17.3)…
… and sure enough, after doing so, the problem was fixed and ./bqetl --help started working again:
wlach@antwerp bigquery-etl % ./bqetl --help
Usage: bqetl [OPTIONS] COMMAND [ARGS]...
CLI tools for working with bigquery-etl.
...
After confirming the fix, I put up a quick pull request and the problem is now resolved, at least for me.
It’s a bit unfortunate that dependabot (which we have configured for this repository) didn’t send an update for protobuf, which would have fixed this problem earlier.1 It seems like it’s not completely reliable for python packages, for whatever reason: I have also noticed this problem with mozregression.
I suspect (though can’t confirm) that the problem here is a backwards-incompatible change made to either protobuf or one of the packages that uses it. However, the nature of the incompatibility seems subtle: bigquery-etl works fine with the old set of dependencies we run in continuous integration and it appears to only come up in specific circumstances (i.e. mine). Unfortunately, I need to get back to what I was actually planning to work on and don’t have time to unwind the rather complex set of interactions going on here. Maybe later!
What I would have done differently
This illustrates (again) to me that while some shortcuts and heuristics can save a bunch of time and mental effort (Googling things all the time is basically standard practice in the industry at this point), sometimes you really just need to look a little closer at the problem to find a solution. I was hesitant to do this in this case because I’m never sure where those kinds of rabbit holes are going to take me (e.g. I spent several days debugging a bad interaction between Kubernetes and our airflow cluster in late 2019 with not much to show for the effort), but often all it takes is understanding the general shape of the problem to move you to a quick solution.
Other lessons
Here are a couple of other things this experience reinforced for me (these are more subjective, take them or leave them):
Local development environments are kind of a waste of time. The above work took me several hours and it’s going to result in ~zero user-visible improvements for anyone outside of Mozilla Data Engineering. I’m excited about the potential productivity improvements that might come from using tools like GitHub Codespaces.
While I can’t confirm this was the source of the problem in this particular case, in general backwards compatibility on every level is super important when your software has broad reach and doubly so if it’s a widely-used dependency of other software (and is thus hard to reason about in isolation). In these cases, what seems like a trivial change (e.g. improving the type signatures inside a Python library) can squander many hours of people’s time if you’re not careful. Backwards-incompatible changes, however innocuous they may seem, should always invoke a major version bump.
Likewise, bugs in software that have broad usage (like dependabot) can have big downstream impacts. If dependabot’s version bumping for python was more reliable, we likely wouldn’t have had this problem. The glass-half-full interpretation of this is that fixing these types of issues would have an outsized benefit for the commons.
As an aside, the main reason we use dependabot and aggressively update packages like google-api-core is due to a bug in pip. ↩
One of my main goals with Irydium is to allow it to be a part of as many data science and engineering workflows as possible (including ones I haven’t thought of). Yes, like Iodide and other products, I am (slowly) building a web-based interface for building and sharing dashboards, reports, and similar things. However, I also want to fully support local and command-line based workflows. Beyond the obvious utility of being able to use your favorite text-editor to create documents, this also opens up the possibility of combining Irydium with other tools and workflows. For a slightly longer exposition on why this is desirable, I would highly recommend reading Ryan Harter’s post on the subject: Don’t make me code in your text box.
Using the irydium template
To make getting started easier, I just created irydium-template: a simple GitHub repository which contains a minimal markdown document (a Big Mac index visualization) which you can use as a base, as well as a bit of npm scaffolding to get you up and running quickly. To check it out via the console, I recommend using degit (the tool of choice for such things in the Svelte community):
npx degit git@github.com:irydium/irydium-template.git my-notebook
npm install
npm run dev
This will start a web server that renders the document (index.md) on port 3000, along with some debugging options. As you edit and save the document, the site should update automatically.
Publishing your work
When you’re happy with the results, you can create a static version of the site (an index.html file) by running npm run build. You can publish this via whatever you like: GitHub Pages, Netlify, Vercel, or… my new favorite service, surge.sh. Surge provides a really simple service for hosting static sites and works great with Irydium. Installing and running it locally is two commands:
npm install -g surge
surge
Surge will prompt you for an email and a password, then will automatically publish your site at a unique URL. As an example, I published a site for the above template: few-blade.surge.sh
Interested in chatting more about this? Feel free to reach out on the Irydium Gitter chat.
Some quick updates on where Irydium is at, roughly a week and a half before my mini-sabbatical at the Recurse Center ends.
JupyterBook and MyST
I’d been admiring JupyterBook from afar for some time: their project philosophy appealed to me greatly. In particular, the MyST extensions to markdown seemed like a natural fit for this project and a natural point of collaboration and cross-pollination. A couple of weeks ago, I finally got in touch with some people working on that project, which prompted a few small efforts:
I’ve become convinced that building on top of MyST is right for both Irydium and the larger community. Increasing Irydium’s support for MyST is tracked in irydium/irydium#123.
Using Irydium to build Irydium
I’ve been spending a fair bit of time thinking about how to make it easier for people to build Irydium documents through composition of existing documents, and I’ve landed the first pieces of this. The first is the ability to “import” a code chunk from another Irydium document. There are a few examples of this in the new components section of irydium.dev:
In a sense this allows you to define a reusable piece of code along with both documentation and usage examples. I think this concept will be particularly useful for supporting language plugins (which I will write about in an upcoming post).
It’s a real project now
I spent a bit of time last week doing some community gardening. I still consider Irydium an “experiment” but I’d like to at least open up the possibility of it being something larger. To help make that happen, I started working on some basic project governance pieces, namely:
We have a code of conduct and contributing guidelines. I opted to go for the Contributor Covenant, which seems to be a good minimal viable social contract. I considered proposing something more comprehensive (like the Rust Code of Conduct), but I felt that’s something for a group of people to discuss and debate, should the time come when Irydium is more than a one-person show. For now, I’ll do my best to make sure that everyone in Irydium’s orbit has a good experience.
There’s a proper issues list, including some “good first bugs” for people to look at (shout out to @m-clare for submitting the first PR to Irydium!)
There’s not a ton of time left at RC, so some of these things may have to be done in my spare time after the batch ends. That said, here’s my near-term roadmap:
Add support for code chunks to output content directly to the DOM (currently the only way to output to an Irydium document is through a Svelte component). This will be particularly important for Python support, where people expect the output of a cell running altair or matplotlib to display directly in the document (as they do in Jupyter). Tracked in irydium/irydium#122.
Integrate ellx.io’s next-generation JavaScript bundler, tokamak. This should make building irydium documents much more robust and error proof and paves the way to further improvements. Special shout-out to the ellx developers for being so friendly and open to collaboration: ellx is a novel approach to application development and definitely worth checking out if you haven’t already. Tracked in irydium/irydium#125.
Finish and document support for language plugins (and make another blog post especially about them, they’re cool!). Tracked in irydium/irydium#144.
Yesterday (July 11, 2021) was the 10 year anniversary of starting at the Mozilla Corporation. My life has changed a ton in those years: in that time I ended a marriage, changed the city in which I live twice, and took up religion.1 Mozilla has also changed pretty drastically in my time here, especially in the last year.
Yet somehow I’m still at it, for more or less the same reasons that led me to accept my initial offer to join the A-team.2 The Internet has the immense potential to be a force for individual empowerment and yet more than ever, we see this technology used to consolidate unchecked power, spread misinformation, and generally exploit people. Mozilla is not perfect (no organization is: 10 years anywhere will teach you that), but it’s one of the few remaining counter-forces to these accelerating trends. While I’m currently taking a bit of a break to explore some stuff on my own, I am looking forward to getting back to work on the mission when I return in mid-August.