Tuesday 31st October, 2017
We continued working on the Asset Management project for GDS this week, working variously from home, TOG Bloomsbury and Space4.
James was away on Thursday and Friday to visit the Isle of Wight.
Chris and I spent the week working on serving a subset of Whitehall assets from Asset Manager. James continued to work on decommissioning the use of NFS for storing Asset Manager assets.
We decided to descope the work of serving Whitehall assets from Asset Manager by focussing on organisation logos. We’d initially descoped the work to focus on non-access-controlled assets but that was still proving quite large. Descoping further should help us get an end-to-end slice working earlier and we’ll be able to use what we learn to help migrate the remaining assets.
Chris and I paired on tracking down a hairy bug in our new CarrierWave File object where, even after removing the file, it was still reported as being present. The fix was simply adding a method to our object but the tangle of code made it very hard to find. I wonder whether it’d be possible for CarrierWave to have some kind of tests that we could use to check whether our object satisfies the interface it’s expecting.
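To illustrate the shape of the problem: CarrierWave's storage layer expects its file objects to respond to a particular (undocumented) set of methods. The sketch below is a hypothetical, simplified stand-in for our wrapper, not the actual Asset Manager code; the class and method names are assumptions, but it shows how omitting a single presence-check method can leave a deleted file looking "present".

```ruby
# Hypothetical sketch of a CarrierWave-style file wrapper -- the names
# and the exact interface are assumptions, not our real code.
class RemoteFile
  def initialize(contents)
    @contents = contents
  end

  def read
    @contents
  end

  def size
    @contents ? @contents.bytesize : 0
  end

  def delete
    @contents = nil
  end

  # The bug we chased was analogous to this method being missing:
  # without a presence check that tracks deletion, the file kept
  # being reported as present after it had been removed.
  def exists?
    !@contents.nil?
  end
end

file = RemoteFile.new("logo.png contents")
file.delete
file.exists? # => false, once the method above is defined
```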
Fixing the hairy CarrierWave problem means that we now have Whitehall organisation logos being uploaded to and deleted from Asset Manager. All that remains is to upload existing organisation logos and switch the Nginx routing so that they’re being served from Asset Manager instead of Whitehall. Chris worked on uploading existing logos and I made the necessary Nginx and Whitehall changes so that we’re ready to switch once the existing logos have been uploaded.
James enabled cross-region replication on our production S3 asset bucket. We’re currently using Duplicity to back up our production assets from NFS to S3, so switching to AWS-native technology will allow us to remove another moving part from the GOV.UK stack.
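For reference, enabling cross-region replication with the AWS CLI looks roughly like the following. The bucket names and IAM role ARN here are placeholders, not our real configuration, and versioning must already be enabled on both the source and destination buckets.

```sh
# Sketch only -- bucket names and the role ARN are illustrative.
aws s3api put-bucket-versioning \
  --bucket example-assets \
  --versioning-configuration Status=Enabled

aws s3api put-bucket-replication \
  --bucket example-assets \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {"Bucket": "arn:aws:s3:::example-assets-replica"}
    }]
  }'
```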
James has also started investigating how to copy the production assets to both staging and integration overnight to mirror the current rsync’ing of assets between environments.
We arrived one morning to learn that one of our Nginx changes contained an error that prevented Nginx from starting! It was discovered because the integration environment is rebooted every night and Nginx failed to start when the machines came up. We found and fixed the problem quite quickly and James updated the Puppet scripts so that they’ll fail fast if they detect a problem with the Nginx config in future.
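One way to get that fail-fast behaviour in Puppet is the file resource’s validate_cmd parameter, which refuses to install a new config file if a check command rejects it. This is a sketch under assumed paths, not our actual manifests:

```puppet
# Sketch of failing fast on a bad Nginx config -- the file path and
# source location are illustrative. Puppet substitutes % with the
# path of the candidate file and only installs it if the command
# exits successfully.
file { '/etc/nginx/nginx.conf':
  ensure       => file,
  source       => 'puppet:///modules/nginx/nginx.conf',
  validate_cmd => '/usr/sbin/nginx -t -c %',
  notify       => Service['nginx'],
}
```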
We had our regular catch-up meeting with Daniel on Wednesday.
After a successful call with our new accountant, during which they answered all the questions we had about the accounts, we were able to get them submitted and our corporation tax paid. It was a relief to get this all done given the deadline is the end of October!
Changing our Articles of Association
James continued to read through the new Articles of Association that we discussed in week 453. We’re still hoping to have adopted them by the time we attend the CoTech retreat at Wortley Hall.
We combined our regular monthly drinks with the Space4 launch party on Thursday. It was really well attended and the handful of lightning talks I heard were all really good. Tom W joined us and so we ended up heading off early to a local pub for a bit of a catch-up before heading home.
Chris drafted a proposal to request some money from Solid Fund to help with funding the CoTech retreat to Wortley Hall and James posted a summary of our CoTech involvement to a thread on the discussion forum.
James investigated why our simple cashflow calculator started reporting that we had an additional two months’ reserves in the bank from one month to the next. While an additional two months sounds positive, we were worried that the jump meant that something wasn’t quite right in our calculations. We don’t completely understand the reason for the jump but we’re fairly confident that the results are correct. We’ve agreed to pair on this task in future to try to ensure we’re all doing it in the same way.
James updated our website to avoid exceptions we were seeing when a certain crawler requested our content as text/plain. The current fix is to reject these requests with a 406 Not Acceptable response, but we wonder whether simply returning HTML in this case would be better.
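The logic can be sketched as a small Rack-style content negotiation check. This is a hypothetical, simplified stand-in, not the actual fix (the site is a Rails app, so the real change looks different in detail):

```ruby
# Hypothetical sketch of the content negotiation -- names and the
# acceptable-types list are assumptions, not the real implementation.
ACCEPTABLE_TYPES = ['text/html', '*/*'].freeze

def handle_request(accept_header)
  # Strip quality parameters (e.g. ";q=0.9") and compare media types.
  requested = accept_header.to_s.split(',').map { |t| t.split(';').first.strip }
  if (requested & ACCEPTABLE_TYPES).empty?
    # Current behaviour: refuse unsupported Accept headers outright.
    [406, 'Not Acceptable']
  else
    [200, '<html>...</html>']
  end
end

handle_request('text/plain') # => [406, "Not Acceptable"]
```

The alternative we’re mulling over would simply drop the 406 branch and return HTML regardless of the Accept header.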
That’s all for this week.
If you have any feedback on this article, please get in touch!
Historical comments can be found here.