Was feeling a bit restless today, so I decided to build something on a theme I’d been thinking of since, oh gosh, I guess high school — an ecosystem simulation.
My original concept for it had three different types of entities — grass, rabbits, and foxes wandering around in a fixed environment. Each would eat the one before it and try to reproduce. Both the rabbits and the foxes needed to eat continually to survive, otherwise they would die. The grass would just grow unprompted. I think I may have picked up the idea from somewhere else, but I'm not sure (it's been nearly 17 years, after all).
I suppose the urge to do this comes from my fascination with the concepts of birth, death, and rebirth. Conway's Game of Life is probably the most famous computer representation of this sort of theme, but I always found its behavior slightly too contrived and simple to be deeply satisfying (at least from the point of view of representing this concept: the game is certainly interesting for other reasons). Conway's simulation is completely deterministic and has only one type of entity, the cell. There are elements of randomness and hierarchy in the real world, and I wanted to represent these somehow.
It was remarkably easy to get things going using my preferred toolkit for this sort of project (JavaScript and Canvas) — about 3 hours to get something on the screen, then a bunch of tweaking until I found the behavior I wanted. Either I'm getting smarter or the tools for building these things are getting better. Probably the latter.
In the end, I only wound up having rabbits and grass in this iteration of the simulation, and went for a very abstract representation of what was going on (colored squares for everything!). It turns out that nothing more was really necessary to create something that held my interest. Here's a screenshot (it doesn't really do it justice):
If you'd like to check it out for yourself, I put a copy on my website here. It probably requires a fairly fancy computer to run at a decent speed (I built it on a 2014 MacBook Pro and made very little effort to optimize it). If that doesn't work out for you, I put up a video capture of the simulation on YouTube.
The math and programming behind the simulation are completely arbitrary and anything but rigorous. There are probably a bunch of bugs and unintended behaviors. This has all probably been done a million times before by people I've never met and never will. I'm ok with that.
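If you're curious what "completely arbitrary" means in practice, the core of the thing is just a per-tick update loop along these lines. This is a simplified, made-up sketch — the rule structure matches what I described above, but the names, energy values, and probabilities are invented for illustration and aren't the actual code:

const SIZE = 40;                              // grid dimensions (illustrative)
const GRASS_GROWTH_CHANCE = 0.01;             // chance a cell sprouts grass each tick
const EAT_ENERGY = 4;                         // energy gained by eating a patch of grass
const REPRODUCE_THRESHOLD = 10;               // energy needed before a rabbit reproduces

let grass = new Set();                        // cells ("x,y") currently containing grass
let rabbits = [{ x: 20, y: 20, energy: 6 }];  // starting population

function tick() {
  // Grass just grows unprompted: every cell has a small chance of sprouting.
  for (let x = 0; x < SIZE; x++) {
    for (let y = 0; y < SIZE; y++) {
      if (Math.random() < GRASS_GROWTH_CHANCE) grass.add(`${x},${y}`);
    }
  }

  const survivors = [];
  for (const rabbit of rabbits) {
    // Wander one step in a random direction, staying on the grid.
    rabbit.x = Math.max(0, Math.min(SIZE - 1, rabbit.x + Math.floor(Math.random() * 3) - 1));
    rabbit.y = Math.max(0, Math.min(SIZE - 1, rabbit.y + Math.floor(Math.random() * 3) - 1));
    rabbit.energy -= 1;

    // Eat any grass on the current cell.
    const cell = `${rabbit.x},${rabbit.y}`;
    if (grass.has(cell)) {
      grass.delete(cell);
      rabbit.energy += EAT_ENERGY;
    }

    // Reproduce when well fed; die when out of energy.
    if (rabbit.energy >= REPRODUCE_THRESHOLD) {
      rabbit.energy -= EAT_ENERGY;
      survivors.push({ x: rabbit.x, y: rabbit.y, energy: EAT_ENERGY });
    }
    if (rabbit.energy > 0) survivors.push(rabbit);
  }
  rabbits = survivors;
  // Each tick then gets drawn to the canvas (one colored square per cell)
  // and scheduled again via requestAnimationFrame.
}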
Update: The source is now on GitHub, for those who want to play with it and submit pull requests.
So I went to PyCon 2015. While I didn't leave quite as inspired as I did in 2014 (when I discovered IPython), it was a great experience and I learned a ton. Once again, I was incredibly impressed with the organization of the conference and the diversity and quality of the speakers.
Since Mozilla was nice enough to sponsor my attendance, I figured I should do another roundup of notable talks that I went to.
Technical stuff that was directly relevant to what I work on:
- To ORM or not to ORM (Christine Spang): Useful talk on when using a database ORM (object-relational mapper) can be helpful, and even faster than using the database directly. I feel like there's a lot of misinformation and FUD on this topic, so this was refreshing to see. video slides
- Debugging hard problems (Alex Gaynor): Exactly what it says — how to figure out what’s going on when things aren’t behaving as they should. Great advice and wisdom in this one (hint: take nothing for granted, dive into the source of everything you’re using!). video slides
- Python Performance Profiling: The Guts And The Glory (Jesse Jiryu Davis): Quite an entertaining talk on how to properly profile Python code. I really liked his systematic and realistic approach — which also covered the thought process behind how to do this (hint: again, it comes down to understanding what's really going on). Unfortunately the video is truncated, but even the first few minutes are useful. video
Non-technical stuff:
- The Ethical Consequences Of Our Collective Activities (Glyph): A talk on the ethical implications of how our software is used. I feel like this is an under-discussed topic — how can we know that the results of our activity (programming) serve others and do not harm them? video
- How our engineering environments are killing diversity (and how we can fix it) (Kate Heddleston): This was a great talk on how to make the environments in which we develop more welcoming to under-represented groups (women, minorities, etc.). This is something I’ve been thinking a bunch about lately, especially in the context of expanding the community of people working on our projects in Automation & Tools. The talk had some particularly useful advice (to me, anyway) on giving feedback. video slides
I probably missed out on a bunch of interesting things. If you also went to PyCon, please feel free to add links to your favorite talks in the comments!
Just wanted to give another quick Perfherder update. Since the last time, I've added summary series (which is what Graphserver shows you), so we now have (in theory) the best of both worlds when it comes to Talos data: aggregate summaries of the various suites we run (tp5, tart, etc.), with the ability to dig into individual results as needed. This kind of analysis wasn't possible with Graphserver, and I'm hopeful it will help us track down the root causes of Talos regressions more effectively.
Let's go through an example of where this might be useful by showing how it can highlight problems. Recently we tracked a regression in the Customization Animation Tests (CART) suite back to the commit in bug 1128354. Using Mishra Vikas's new "highlight revision mode" in Perfherder (combined with the hash of the revision where the regression was pushed to inbound), we can quickly zero in on its location:
It does indeed look like things ticked up after this commit for the CART suite, but why? By clicking on the datapoint, you can open up a subtest summary view beneath the graph:
We see here that it looks like the 3-customize-enter-css.all.TART entry ticked up a bunch. The related test 3-customize-enter-css.half.TART ticked up a bit too. The changes elsewhere look minimal. But is that a trend that holds across the data over time? We can add some of the relevant subtests to the overall graph view to get a closer look:
As is hopefully obvious, this confirms that the affected subtest continues to hold its higher value, while another test just bounces around more or less in the range it was in before.
Hope people find this useful! If you want to play with this yourself, you can access the Perfherder UI at http://treeherder.mozilla.org/perf.html.
For the past few months I’ve been working on a sub-project of Treeherder called Perfherder, which aims to provide a workflow that will let us more easily detect and manage performance regressions in our products (initially just those detected in Talos, but there’s room to expand on that later). This is a long term project and we’re still sorting out the details of exactly how it will work, but I thought I’d quickly announce a milestone.
As a first step, I've been hacking on a graphical user interface to visualize the performance data we're now storing inside Treeherder. It's pretty bare bones so far, but it already has two features that Graphserver doesn't: the ability to view sub-test results (i.e. the page load time for a specific page in the tp5 suite, as opposed to the geometric mean of all of them) and the ability to see results for e10s builds.
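For anyone unfamiliar with the term: the suite-level number is just the geometric mean of the individual page results, i.e. the nth root of their product, which keeps a single outlier page from dominating the summary. A quick illustrative snippet (not Treeherder's actual code):

// Illustrative only: compute the geometric mean of a set of page load times.
// Summing logs avoids overflow from multiplying many values together.
function geometricMean(values) {
  const sumOfLogs = values.reduce((total, v) => total + Math.log(v), 0);
  return Math.exp(sumOfLogs / values.length);
}

geometricMean([250, 300, 400]); // ≈ 310.7ms, vs. an arithmetic mean of ≈ 316.7ms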
Here's an example, comparing the tp5o 163.com page load times on Windows 7 with e10s enabled (and not):
Green is e10s, red is non-e10s (the legend picture doesn’t reflect this because we have yet to deploy a fix to bug 1130554, but I promise I’m not lying). As you can see, the gap has been closing (in particular, something landed in mid-January that improved the e10s numbers quite a bit), but page load times are still measurably slower with this feature enabled.
Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) over the last few months. Here are some highlights:
- Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
- Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
- Way better logging and printing of remaining time to finish bisection (Julien Pagès)
- Much improved performance when bisecting inbound (Julien)
- Support for automatically determining whether a build is good or bad via a custom script (Julien)
- Tons of bug fixes and other robustness improvements (me, Sam, Julien, others)
Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!
Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere.
Just a quick note that my campaign to raise funds for MSF was successful. From friends and family, coworkers, and my extended network I managed to raise the sum of $3106. This was a bit above my maximum of $2500 that I said I would match, but what the hell:
So in total that means $6212 for Doctors without Borders. Not half bad for a week long campaign from someone as insignificant as myself!
Most of the people who donated wanted to remain anonymous, which is cool. But among those who were cool about being public, big thanks to: Julia Evans, Chris Atlee, Mike Conley, Ted Tibbetts, Paul and Eva Lachance, Alex Demarsh and Lauren Reid (if I’m missing someone who wants to be named, please let me know).
Incidentally, I've learned a bit more about Ebola over the past week, and I'm fairly sure that we'll be ok here in Canada and the States, given that the virus isn't actually that contagious (and that we have reasonable facilities for treating it in a safe manner, major screwups in Dallas notwithstanding). Liberia and other parts of Africa are still very much worth worrying about though, so I still 100% believe that starting this campaign was the right thing for me to do as an individual. Actually wiping out the disease will likely require government-level resources and intervention. Still waiting for a more serious resource commitment from Canada (cautiously optimistic about what's happening with the States).
I've been appalled at the (lack of) response to the Ebola epidemic in Africa. We're talking about a deadly disease which could infect more than a million people by January 2015 and explode out of control, but the response from our governments has been pitiful, ranging from "inadequate" in the case of the United States to "pretty much nothing" from Canada.
Closer to home, my friends and acquaintances seem more interested in trading cute quips (“Want to get space on public transit? Pretend to have a conversation on your cellphone about how you just came home from Liberia!” LOL) and image captures from CNN and Fox News saying dumb things (“Is Ebola the ISIS of epidemics?”) than anything else.
I don’t really know how to emphasize how serious this situation is. Yes, there are a million causes and issues competing for people’s attention these days. But Ebola is an epidemic. Many, many people could die. Even more could suffer. In the extreme case, this could be the end of civilization as we know it. Some kind of action is called for.
I don’t have any medical expertise, otherwise I’d probably be offering to volunteer abroad. The most I have to offer is money and a very modest circle of influence. So be it, that will have to do. I’m not going to sit back and do nothing while this happens.
Doctors without Borders seems to be the non-profit organization that has been able to do the most to respond to this situation (and they've generally gotten good reviews in the past as a no-nonsense humanitarian group responding to crises all over the world). For the next week, I will match dollar for dollar any contribution (of any amount) that anyone donates to this organization, up to a maximum of $2500 (total across all contributions). Just email me at wrlach@gmail.com with your donation and whether you'd like to remain anonymous.
At the end of this coming week (Sunday, October 19th), I'll donate the matching funds to MSF and do up another blog post with the results and the names of the people who donated and consented to have their names published. The world needs to know that I'm not the only one who cares about this.
Over the last few months, I discovered the joy that is CSS Flexbox, which solves the "how do I lay out this set of divs horizontally or vertically?" problem. I've used it in three projects so far:
- Centering the timer interface in my meditation app, so that it scales nicely from a 320×480 Firefox OS device all the way up to a high-definition monitor
- Laying out the chart / sidebar elements in the Eideticker dashboard so that maximum horizontal space is used
- Fixing various problems in the Treeherder UI on smaller screens (see bug 1043474 and its dependent bugs)
When I talk to people about their troubles with CSS, layout comes up very high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, requiring hacks involving floated divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled "Give up and use tables".
Flexbox has pretty much put an end to these problems for me. There's no longer any need to "give up and use tables", because using flexbox is pretty much just *like* using tables for layout, only with more uniform and predictable behaviour. It's so great. I think we're pretty close to flexbox being supported across all the major browsers, so it's fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.
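To give a sense of what I mean, here's roughly what the "panel of vertical buttons" and "center the timer" cases look like with flexbox. This is a minimal sketch — the class names are invented for illustration — but these declarations really are all the layout logic required:

.button-panel {
  display: flex;
  flex-direction: column;   /* stack the buttons top to bottom */
  align-items: stretch;     /* give every button the full panel width */
}

.timer-container {
  display: flex;
  justify-content: center;  /* center horizontally (along the main axis) */
  align-items: center;      /* center vertically (along the cross axis) */
  height: 100vh;            /* fill the viewport so the centering has room to work */
}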
To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.
I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:
- Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer-grained regression ranges from. E.g. if you're pretty sure a regression occurred on fx-team, you can do something like: mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14
- Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the underlying issue is still open (it's a bit tricky to match mozilla-central commits to those of other branches), but I think this most recent fix should make things work in 99.9% of cases. Let me know if I'm wrong.
- Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly
If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.
Over the past two weeks, I've been working a bit on the Treeherder front end (our interface for managing build and test jobs from Mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.
One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:
git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js
Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual Treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS, just go ahead and do so with your favorite editor and the changes will be immediately reflected.
We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:
https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking
If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.