1. I Asked AI to Rebase My Git Branch and Accidentally Discovered the Future (and Uncovered the Past)

    One of the things I initially missed most when shifting from emacs to zed was magit, which has been my main git interface for, IDK, let's say about 20 years đŸ€·.

    I'm a bit of a sloppy worker, but a meticulous git user: lots of atomic annotated commits, many linear branches, plenty of cherry-picking and rebasing and stashing, and magit integrates all of this naturally into the normal flow.

    Zed has git integration, obviously, but it's entirely basic - pull, push, commit, branch, diff, that's about it. It does let you stage incrementally, which is most of the problem solved, and of course, there is a terminal integration, so you can pop a shell and use git CLI (like a farmer đŸ€Ș).

    It's OK, but clunky, obviously. A little bit of friction and dissonance in this slick world of modern native editors that were actually made this century. I think the kids are a lot looser with git than I am, and perhaps they have a point, but I like the discipline.

    Here comes the machine mind

    You probably figured out what's coming next, if you bothered to read this far.

    I've been using the zed LLM integration to write commit messages for a while - it's pretty well suited to that: given a bit of context, it can generate a draft commit message that summarises the changes, which you tweak and approve before applying. It's a good example of the kind of low-hanging, small improvement you can achieve with even a simple model, precisely applied to a narrow context that involves generating prose. Smoothing out busywork.

    Obviously, zed is pretty agentic đŸ€ą, because that stupid word is all the rage these days. I guess you can open a chat box and ask your editor to vibe code your whole application. (Good luck with that, if you do; I think it's liable to create more work in integration than it saves in writing, but maybe that's just me.)

    I use the agents for a bit of boilerplate here and there - refactor this, replace these magic numbers with proper constants, redo this part to use an iterator, what is the type checker complaining about here, how do I configure the language server to disable a misfeature. Again, it's not too bad at that kind of drone effort, and there is a small but appreciable productivity gain to be had.

    I cross the streams, and surprise myself

    This last week, I was suddenly inspired to cross the streams, and something interesting happened, surprising me enough to bother drafting a post on the topic. I don't really like to thought-leader, but occasionally something will delight me enough to want to share.

    I wanted to tidy up a messy WIP branch that had collected a couple of different ideas in progress (and which coincided with me correctly figuring out how to enable auto-linting in zed, so a lot of aesthetic formatting corrections had suddenly been dropped into an already untidy sandbox).

    A minute or two fiddling with rebase in magit, but in zed...? Time to roll up my sleeves, flex, and take out the ol' git pitchfork, or hitch the rebase propagator up to the reflog tractor (I do not really know what farmers do). But, wait. I wonder if...

    So I pull up the agent and ask it "can you run an interactive rebase on this branch please, and group all the white-space only changes into one commit, the other formatting changes into another, and then separate the removal of the obsolete class from the other feature work?"

    And, it kind of worked! It got stuck a couple of times, I had to pop in and edit a couple of things, and once I recognised where the conflicts were going to land, I restarted it with the order of commits I wanted spelled out a little more explicitly. But I got the result I intended, certainly with no more fiddling about than if I'd been performing the task manually, maybe less? It's hard to measure, but I enjoyed the experience.
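
    For the curious, git exposes plumbing for scripting exactly this kind of surgery, and my guess (it is only a guess; I haven't inspected what zed actually runs) is that the agent is driving something like it under the hood. Here's a minimal, hypothetical sketch in Python: when GIT_SEQUENCE_EDITOR is set, git hands the rebase todo file to that command instead of opening your editor, so a small script can regroup and squash the commits programmatically.

        #!/usr/bin/env python3
        """Toy GIT_SEQUENCE_EDITOR: gather 'whitespace' commits into one squash chain.

        Hypothetical usage (not what zed's agent actually does):
            GIT_SEQUENCE_EDITOR="python3 group_whitespace.py" git rebase -i main
        git invokes this script with the todo file path as its only argument.
        """
        import sys

        path = sys.argv[1]
        with open(path) as f:
            lines = f.read().splitlines(keepends=True)

        # todo entries look like: "pick <sha> <subject>"
        picks = [l for l in lines if l.startswith("pick ")]
        ws = [l for l in picks if "whitespace" in l.lower()]
        rest = [l for l in picks if l not in ws]

        if len(ws) > 1:
            # keep the first whitespace commit as the base, squash the rest onto it
            grouped = [ws[0]] + [l.replace("pick ", "squash ", 1) for l in ws[1:]]
            lines = rest + grouped  # comment lines dropped; git ignores them anyway

        with open(path, "w") as f:
            f.writelines(lines)

    Reordering the picks like that is, of course, exactly where the conflicts land, which was the part the agent and I had to iterate on.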

    One thing I definitely realised: it was less irritating. I was able to complete a disruptive yet necessary chore while in the middle of doing something more interesting, with much less context switching than performing it manually. And that got me thinking about interfaces a bit more. Chat bots are unquestionably a very ergonomic interface.

    Increasingly, I am starting to think that a key part of unlocking the value of LLMs may be to think about them more as solutions to interface design problems.

    Considering git interfaces

    Git is a classic case. Git's interface sucks space balls, everyone knows it. I mean, I know a few people who like it, and I'm happy for them (but I think they are weirdos). It does allow for a bunch of studly machismo and nerd flexing for anyone who has invested enough time in learning its arcana to impress people with stunt-git trick shots, and that can be fun. I have definitely enjoyed being the knuckle-cracking "stand back everyone and stop panicking, I know how to fix this" guy on a number of occasions, but that is a sideshow. You can do circus tricks with power tools, and some people do, but it's not the reason the tools were made.

    Git has a compelling storage model, and a commit graph workflow that solves a bunch of annoying challenges with incremental and concurrent code editing and integration, efficiently, and better than earlier version control systems did. That's why it became a huge success.

    Git's horrible ergonomics and implicit barriers to entry, but compelling powers of sharing and integration, allowed GitHub to spring into existence as a multi-trillion-zillion company from out of nowhere, just by slapping a nicer set of user abstractions on top of git's ugly robotic core. (In the process it accidentally invented some other significant ergonomic problems, like pull-request-based workflows, and, IDK, tag-driven releases, but I guess that's a different blog post.)

    Git's foul ergonomics are what pulled me into learning magit, which has a reasonably steep learning curve of its own, but also follows the emacs way of having a lovely manual. It's a better-thought-out UI. It also leverages many common emacs behaviours, so when you're already working in emacs, which I typically was, again, you get this reduction in context switching.

    The mythical 'Flow State'

    Why is this so important? At this point, it's tempting to dive off into a sidebar about "programmer flow state", a long-held shibboleth of the developer community with which I don't have much truck, but many thousands of words can be found about it on the web already. I don't like anything that reinforces programmer identity as a higher state of being, and I dislike how easily the concept is weaponised towards shunning collaboration and social work (e.g. "coders must not be interrupted during holy flow state") - again, this is clearly a different, adjacent blog post - but the notion is not fundamentally baseless.

    Programming tasks often require holding a lot of accrued context about a chain of thought, and carefully expressing it in a narrow, precise domain, incrementally progressing towards a well-defined future state. It's not easy for a human mind to do that; it takes a bit of effort. Effortlessly breaking out of one domain into another mode of expression isn't really possible. To do that even passably well, perhaps you would need a different kind of "mind", even?

    I think version control is interesting here, precisely because it's liminal stuff. It's programming-adjacent work in a certain sense, it's a chore - any time you have to break thread to address some version control nonsense or busywork, you are in essence interrupting yourself. It makes a lot of sense to try and find a low-friction, simplified user interface to mediate these kinds of tasks. Like GitHub, or magit. The ideal is to minimise the amount of disruption you face while working on these background or side-channel threads of work. You can mitigate this in two main ways, I think. You can look for ways to simplify the UI to better suit a particular working context, as we have been discussing; alternatively, you can divide the work and make the secondary context a first-class task that's managed separately.

    One way to do this is to structure the way you work so you can plan your version control stories a little ahead of time, and use discipline in your task management to make the work fit the version control better, with strategies like formal branching and ticketing protocols, and rigorous task mapping that accounts for tech debt, probably integrated into a project management system.

    Another way you can do it is to make it literally someone's job and push the load out sideways - examples of this might be code review protocols and gatekeepers for merges (in the olden times, with less sophisticated version control software, I often worked on teams with a nominated 'merge master', whose entire job, or a large portion of it, was basically to do the integration and harder version control work on behalf of the feature developers), plus other adjacent roles like scrum masters, or DBAs.

    How about SQL?

    That train of thought got me thinking about SQL. I think SQL is another interesting example of 'programming-adjacent work', although it might be a subtler example than it first appears. Let's have another digression then. I really like SQL; although it's obviously covered in warts and sharp edges, I've always appreciated its utility, and to a certain extent its ergonomics. It does share some of the properties I've discussed with git - it's very often a boundary, and a context shift away from the main thread of programming, and programmers tend to hate it, avoid it, and make up nasty memes about it for slack, that kind of thing. Just like version control (or meetings 😘), it's essential and necessary work in a lot of software development, but it's another liminal place, where you get pulled out of the context of thinking about your feature work and software architecture, and land somewhere else for a while, with an annoying external syntax and a lot of aggravating round trips into different tooling.

    Indeed, over the years, a very common programming pattern has been to slap a more program-ergonomic abstraction layer in front of the SQL, once again to try and minimise the friction and narrow the interface - I'm thinking of things like the various ORMs, or the 'noSQL' database engines that bring data modelling and querying closer to the application layer. All of them moderately successful, and yet SQL still hangs around everywhere, slightly annoying everyone, like a remote senior cousin inevitably invited to every wide social gathering, tolerated rather than enthusiastically welcomed.

    That's because SQL is already an ergonomic abstraction. It's kind of the ur-DSL. SQL is there because databases have a lot of inertia associated with them. The data is often where the money is. Data is the raw stockpile of materials, the raw ingredients of the information that's necessary to run large information technology applications. Data tends to accrue value cumulatively, and you want to keep it all in a big lump in one place (this is why we have terms like 'data-mining' and 'data-warehousing'), so you can correlate it, and leverage sexy network effects from having it all integrated into one humongous data domain. Once you pass a certain critical data mass, you need to access it multi-modally, i.e. there will be many different use cases for the same information sets; different users and different applications will emerge from, or require access to, various intersectional pieces of that data blob. Now, reading and updating that data blob by hand, in your preferred programming stack, would really sting. You'd still have to leave your application software context, but you'd now need to delve into a world of low-level file systems, and data packing, and indexed data access, and write locking, and concurrent editing, and wire protocols, and cache invalidation, and the whole nine yards of that side of computer science.

    I pause a little here, because I realise I'm probably making it sound kind of fun to a particular audience segment (amongst which I include myself, periodically), but let's not lose track of the core point. If your intended task is to make a cool dating app that helps your users get laid, low level storage systems coding is a horrible, high effort context switch away from the feature track you're working on. And of course, all this data access code you're having to do while you go will also need tracking in your version control system, more context switches. Instead, we SQL.

    LLMs suck at SQL though

    As an aside, I have found LLM coding assistants to be generally pretty bad at writing SQL. I have two hand-wavy personal theories about why this is the case:

    1. Programmers, on average, in my experience, are really pretty bad at SQL, probably because of the reasons we delved into above. So the training set of the models is full of low-quality material. (Cheap shot, maybe? 😅 Cut me some slack, I've spent many hours of my life fixing other people's bad SQL.)
    2. Effective SQL generation requires a lot of external context the model doesn't have - not only the entire database schema, but also ideas about the data contents and distributions that a pre-trained model doesn't necessarily have any access to. I think getting good SQL results from LLMs would need a lot of context prompting, or specific training. Still, that doesn't entirely explain why they're so bad at basic join syntax.

    I'm tempted to infer something from those two sub-points about SQL requiring fundamentally different kinds of reasoning to other forms of writing, but I'm probably just seeing the face of Poseidon appearing in the patterns of my mental sea foam... I'll leave it there for now. (A third parallel blog post? This stuff is getting fractal.) BTW, the name for that fascinating phenomenon is *pareidolia*, and it's something worth keeping in mind when discussing "AI" concepts...

    The surprising invention and nature of SQL

    Ahem... Another really interesting aside is to look back at the history of SQL. SQL is rather old. It's basically my age, and I've already pointed out I've been using emacs professionally for at least a few decades. SQL emerged from IBM in the 1970s, as a research project, greatly influenced by E.F. Codd's classic article 'A relational model of data for large shared data banks'.

    The primary designers of SQL were Don Chamberlin and Ray Boyce, part of IBM's System R research project, who were tasked with looking into ways to apply Codd's relational, mathematical principles of database modelling to IBM's database business. IBM's database business at that point was most of IBM's business, and it was pretty huge. Prior to relational databases, the existing big-iron database management systems were awkward, weird transactional / hierarchical databases, like IMS/360, where you pretty much had to write a specific computer program to be batch-executed from a queued transaction management system. Each 'query' was more akin to an independent program. In order to change a report, you'd develop a new program, you'd need appropriate programmer time and skill to do it, and your best turnaround for results would be several hours, probably more like days.

    So the System R researchers wanted to make this more flexible, but their ambitions didn't end there. Chamberlin and Boyce wanted to make information retrieval accessible to non-programmers. Here's Chamberlin:

    "Ray and I hoped to design a relational language based on concepts that would be familiar to a wider population of users. We also hoped to extend the language to encompass database updates and administrative tasks such as the creation of new tables and views, which had traditionally been outside the scope of a query language.[...] What we thought we were doing was making it possible for non-programmers to interact with databases. We thought that this was going to open up access to data to a whole new class of people who could do things that were never possible before because they didn’t know how to program."

    Can you see where I'm heading? SQL is a frantically successful example of what we used to call '4GLs' (fourth-generation languages) when I was a school kid (although by that point, the textbooks - and what we didn't yet call the hype cycle - were already breathlessly excited by the imminent arrival of the Fifth Generation Languages, and systems...). The terminology is dated, and stretchy, and marketing-fed, but the central gist is: 4GLs were languages operating at a higher abstraction level. Your inputs and controls would describe a program at an abstraction level much higher than the operation of the system, and the 4GL would write the lower-level program for you. All a bit hand-wavy, but SQL querying has some really interesting properties related to this delegation.

    • it's declarative. Your query describes the data structures you want to retrieve, and the actual details of how the data is retrieved from disk are decided for you by a query planner.
    • it's live and interactive - you can interrogate the system at a REPL
    • it's got a weird-as-hell syntax, full of special cases, that tries valiantly to use ENGLISH-LIKE words, all IN UPPER CASE SHOUTING, dealing in terms of database structures and relational operators, not computer terms like bytes and loops and sorts.

    The query planner is worth a little thought. The planner uses a bit of maths, some heuristics, and a bunch of information and sampled data about your system, and works out a series of reads and sorts and filters that will produce the data structures your query is requesting. Crucially, you don't tell it HOW to do it, just WHAT you want it to do. (Sorry, the upper case is a bit addictive, I'll stop.) Most of the time you don't think about it much more than that. However, most SQL database systems will show you their plans if you ask them, typically via the EXPLAIN keyword, which will show you what the planner thinks it should do to build the result set you wanted. If you don't like what it's decided, you can't tell it to do things differently, but you may teach it to do things differently, typically by updating the available indexes and constraints, or maybe by re-balancing the statistics it uses to make decisions about cardinality and seek times, that kind of thing.
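
    Here's a tiny illustration of that conversation with the planner. I'm using SQLite from Python purely because it's trivially runnable anywhere (other engines spell their plan output differently, and the orders table here is made up for the example):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
        con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

        # ask the planner WHAT it intends to do, without telling it HOW
        for row in con.execute("EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"):
            print(row)  # e.g. (..., 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)')

        # 'teach' it to do things differently by changing the available indexes
        con.execute("DROP INDEX idx_orders_customer")
        for row in con.execute("EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"):
            print(row)  # now e.g. (..., 'SCAN orders')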

    Here again we have a division of labour - the idea is that the structural and statistical and optimising and runtime bits of maintaining the SQL system can be delegated to the programmer and technician classes, who can concern themselves with the implementation and operational parts, while the query writer (a non-programmer, if you remember) can just get on with expressing their tasks in a lower-friction, narrowed domain, where they don't have to context switch so hard away from the task at hand: writing lovely business reports for the sales and finance teams to make slide decks from.

    Now I'm not saying that SQL writing is 1970s prompt engineering, but I'm also not not saying that, right?

    Here's Don again, after the fact:

    "Ray and I were wrong about the predominant usage of SQL. Typically, SQL is embedded in a host programming language and used by professional programmers"

    Is SQL any good though?

    SQL was, as I have said, a spectacular success. It's still bloody everywhere, fifty years on. It was also an abject failure. The syntax is a mess of unaligned clauses with special cases everywhere (INSERT and UPDATE take radically different approaches to their clauses, but sort of do the same thing. So does DELETE, really). You need to understand maths to do it properly. You also need to understand the precise details of the database schema. The schema is maintained in a separate dialect that's somehow intertwined with the querying DSL and uses the same interfaces, but obeys a different syntax and semantics. Record locking is both implicit and explicit. The concurrency model is insane, and nobody actually understands it properly. NULL breaks everything, including query logic. And most damningly, it's almost never used as an interactive REPL by non-programmers. It's mostly folded into programs. In fact, these days an eye-wateringly huge number of applications bundle an entire SQLite embedded RDBMS inside their application deployment.

    Repeated interface patterns, and why I care

    There's a pattern emerging here, I think - querying and reporting wasn't quite programming-adjacent busywork, but it was business-adjacent drone work, and SQL was an attempt to narrow the interface with an ergonomic DSL that got out of the user's way and reduced the scope of the context switch needed to engage with the reporting system. It kind of failed at the interface, if you ask me (and most folks who have to use it), but it succeeded amazingly at reducing the complexity of the context switch. Using SQL to mediate data persistence inside your application is kind of like using magit to fold your git operations right inside your emacs workflow: both serve to shrink the cost of flipping out of your primary task domain into some other essential, dependent domain.

    Either reduce it to a tight DSL, or extract the work and organise it so it can be delegated to another worker. Sometimes you put a DSL on top of a DSL, to tune it down even further. Sometimes you build a team to own the work in that domain. But what if you could build a DSL over a DSL, where the DSL sort of worked like a team you can delegate tasks to? That's a compelling interface, that is.

    When I look at it like this, using an agent REPL in zed to run git operations strongly reminds me of that (failed) SQL/4GL promise. I tell the thing the result I want from it, it builds a plan of execution, and I can iterate on that, interactively. Only this time, I'm using literal English sentences to describe the narrow domain the system has been trained on, not some bastard awful pidgin that somehow ends up with the worst features of both natural languages and programming languages. Sorry, SQL, it had to be said. And you know what, I kind of love you anyway.

    I think this pattern can be found in several places in software development, if you squint right. Programming-adjacent work gets reduced with DSLs or narrow abstractions, which bring the benefit of reducing that tedious, expensive context switch. We have lots of it in CI automation, pipeline building, that kind of thing. DevOps is built out of this stuff: infrastructure as code, deployment charts, meta deployment charts, run-books, playbooks. These domains are also places where I've found LLM-based assistants super helpful - help me grind out some yaml please, so I can add a pull request pipeline that does this thing I just thought of, without having to spend quite so much time reading up on the stupid YAML syntax for this week's CI system, or spending hours sitting in a push/fail/edit/push loop on some git forge. GraphQL over REST APIs - maybe? Unit test generation and test harness design is mostly busywork in a DSL following some declarable constraints. I think I already mentioned figuring out the precise type annotations for things. It makes me think that coding assistants are perhaps more of a user interface paradigm than a coding one. More like a 4GL than a semantic IDE.

    In conclusion

    What's my point? I'm not completely sure (he says, several thousand words in).

    • I like zed, I'm finding it more useful than I thought I would, and I've stuck with it, on and off, for a few months now.
    • I keep finding small useful things for LLMs to do, and I enjoy that process.
    • The most value might be in 'programming-adjacent' things - e.g. as I mentioned, I now write only a small proportion of my commit messages by hand, and I think my commit messages are considerably better for it.
    • Tools that reduce the context switches for 'programming-adjacent' work will win.
    • Conversational interfaces are very low friction for human minds.
    • LLMs are much better than humans at context switching into a different domain, while keeping track of, and applying, accrued context across different tasks.
    • Is there something you need to do that applies a tight but boring DSL across a defined data set? A model might actually be quite good at that. Another weird thing I've found them almost spectacularly good at is network debugging - describe a topology, give them a tcpdump, and watch them spit out a bunch of diagnostic suggestions and potential remedies for you to use.
    • These tools are one of the biggest tech shifts I've seen in my rather long, slightly storied, career that spans a whole bunch of tech paradigm shifts.

    Summary footnotes

    1. This is just some opinions, inspired by how astonished I was to successfully use an "agentic" tool to run some gnarly git stuff I would otherwise have had to pull out the manual for. There's all sorts of important discourse about LLMs and the out-of-control tech hype cycle around them that I don't go near here, and I'm not writing a manifesto.
    2. I'm not a historian, or a first-hand witness of the pre-RDBMS data scene, although I did work alongside people who came from that background, so I do have a lot of secondary-source exposure. I haven't done much research beyond light web searching, so please take my historical characterisations with the appropriate amount of salt. I know pretty much zilch about how IMS/360 actually worked; I just remember the name. I'm generalising.
    3. I wrote this blog post by hand; it's not the sort of thing I think LLMs are much use at. I'm trying to express my own opinions here, in my own voice, because I was excited about a couple of ideas and wanted to try sharing them.
    4. I wrote this blog post in emacs though.
    5. I probably won't write any of those other parallel blog posts, who has time to write blog posts?
    6. I did use Claude to fact check the post. Don't laugh, they can actually do that sort of thing now. This stuff moves fast.
    7. Finally, I feel I ought to note that Ray Boyce died tragically young at 27 in 1974, shortly after the presentation of the SEQUEL design paper. He left an astonishingly outsize impact after such a short career. Put a dent in the world, as they say. Don Chamberlin, happily, is still with us so far as I know.
  2. You wait two and a bit years for a blog to show up, and then all of a sudden there's two of them!? I seem to have accidentally started a new blog. OMG.LOL, a rather whimsical web service I've belonged to for a while, is dropping new features on its members for the holiday season, in the form of a charming animated advent calendar. The theme of the season is 'blogging', and right there on day one, they gave us a weblog service, weblog.lol.

    So I'm over there at cms.weblog.lol. And I appear to be attempting to post a review of a different Christmas pie every day, in a fit of seasonal over-enthusiasm. It will never last. So far I'm three for three though, and having fun.

    Do check out omg.lol btw. It's my favourite kind of internet thing really: user-focused, a paid service (no grotty ad trafficking), and cheap too - right now it costs $5 per year, although the price is rising in January; get in soon if you want to, and you can buy several years in advance. For your money you get a cute namespace of your choosing (obviously I am cms), and a bunch of other neat stuff. Weblogs! (We already mentioned these.) A customisable home page. A DNS subdomain, to do what you like with. A nice short email address, with forwarding. A mastodon instance, an IRC server for members, a pastebin, probably some other things I've forgotten. Keybase proofs!

    It's cool! Loads of 'small web' vibes, a built-in community, and a bunch of fun tools. Definitely gives me nice warm fuzzy 'early internet' feels. I've managed to convince two other people I know to sign up so far. You should too! But in the meantime, I'll be over there blogging. And also maybe over here blogging? I'm such a blogger these days.

  3. I'm sick of Twitter, folks. I've decided to do something both mild and drastic about it. For 2018, I have resolved to stop using it.

    I am not sure what it is for anymore; it certainly doesn't feel like it is for me. I think I've been disengaging slowly for the last couple of years, and in 2017 I repeatedly found it too aggravating and depressing to engage with. I think I would have already ragequit, had one of last year's resolutions not been that silly selfie thing. Thus a seed was planted about resolutions and exits. Brains often work that way. (Referendums are silly though.)

    I was late to twitter. I downloaded my twitter archive, whilst I was scraping out all of the 2017 selfies, and apparently my first tweet is from Dec 2007.

    I was late to Battlestar Galactica as well.

    I probably spent a little while reading twitter before registering, although I don't remember anything specific. I can't remember why I signed up in the first place. Looking at that first month of odd, stilted, entirely quotidian status posts, I can tell I'm working on Logical Bee, mostly alone, babysitting that dog. It's winter. Maybe I'm lonely? I have a dim memory of thinking it was pretty dumb for a long while before getting involved at all. I remember fiddling about connecting it to things, and experimenting with SMS tweets and emails. I don't think it really clicked for the longest while. I remember a sense of a clique I wasn't ever going to be able to get into. That first wave of web-natives, younger than my generation. More attuned to a web of application services and APIs than hypertexts and data servers. I remember tweetups being a thing, and a Bristol one being announced, and spending an hour or two before deciding firmly I wasn't the kind of person that went to that kind of thing. I quite wish I had gone now. I didn't use to be a very good joiner-in of things. I'm not much better at that now. A little bit, perhaps. Now I know to try.

    It took the longest while, but eventually it clicked. I liked the lightness of it. It was sort-of social networking, but social networking at arm's length. Lots of irony, lots of whimsy. I just remembered that the earliest phase of my binning Facebook was converting it to just echo my tweets back into it, for the muggles to read. I remember being very snobby and standoffish about things like hashtags and @replies. My first reply wasn't until August 2008.

    To Daveh! Either I don't know how to reply yet, or the Twitter archive has incorrectly threaded that reply back together. Either seems plausible.

    I didn't use a hashtag until May 2009. Even then I was repurposing "get off my lawn" meta-commentary. Amused to see that my next half dozen hashtags are complaining about moonfruit's use of them for viral marketing. Many years later I ended up working there for a season. Again we see the seeds are sown, and the fruit is reaped.

    Not too ashamed of that one. It's interesting looking back at tweets like that, I have a sense that the prevailing vibe of Twitter at the time was that the cool kids were beating out the idiots. I don't get that vibe off Twitter now.

    By this point it was clearly very firmly entrenched in my daily desktop routine. Once I got hold of smartphones that could run twitter, I think my usage ramped up. By the time I got to last.fm, I was tweeting all the things, curating a couple of hashtags (#fantasypeelsessions for serendipitous word groups that sounded like band names, #fisharecool for cool fish facts), running multiple joke twitter accounts, writing bots, and generally really enjoying it. When I got to Makeshift, twitter seemed to be the wiring behind at least half of everything there; it seemed like necessary internet plumbing for web apps. With hindsight I think that was the peak. It was downhill from there. I don't like it any more, I have detected an opportune moment, and I have decided to leave. At least for one year.

    I'm not going to use this post for arguing about why I think it's broken. One of the largest problems I have with it is the sheer concentration of negativity. And one of the reasons I want to move away from it is to focus on building things that are more positive. It's not just Twitter. I'm pretty broken-hearted about the state of the web in 2017 - it's very far from what I signed on to help build as one of those idealistic Gen X web 1.0 types. And again, rather than just bemoan that, I'd rather start focusing on ways to fix it. For me, in 2018, this means going small, and focusing on building things and content I can own, in the sidelines. I expect I will be updating here more. I plan to double down a bit harder on indieweb things, and federated stuff. POSSE all the things. Death to silos. I've been experimenting with micro.blogs and mastodon.social, and I want to play more with beaker and dat, and blockstack and IPFS and other idealistic p2p proto-webs. Maybe even frogans? The real web looks more like that. Maybe I can help figure out how to make it a bit easier for everyone to clamber onboard.

    "But CMS, I think we're Twitter-friends, what does this mean for US?"

    First off, that's flattering, almost-certainly-entirely-imaginary-cms-fan, thanks! I like you too! Occasionally some of my tweets get as many as five or six engagements, and I do enjoy keeping up with some lovely people, some of whom I met or perhaps only know through twitter. I'm sorry if this feels like a breakup; it's not you, it's me, as they say in the rom-coms. (Actually, I'm not dumping anyone.)

    Something else I want to push for in 2018 is better-quality, stronger social engagement. I want to cultivate more real contact, more high-bandwidth engagement and connection with all the good people. This can work two ways, of course. If you only really interact with me on a tweet-by-tweet basis, and you think you're going to miss that, then do please reach out. We can have coffee, or get beers, or just go fish in a lake, or something else entirely. And I'm going to be pushing myself to reach out to more people in turn, something I'm astronomically poor at. Please help me with this if you can!

    IRL networking I plan to ramp up a bit. More meetups, tech and maybe otherwise. Maybe I'll rescind my conference ban. Maybe I'll start some of these things, or start helping to organise them more.

    I'm not doing an *infocide*. I'll still be publishing things hanging from here, which has plenty of RSS feeds, so if you can still figure out how to integrate those into your workflows, I'll probably never be very far away. Also, if you look at the home page, there's a list of dozens of other not-Twitter platforms you can stalk me on or connect to me via (maybe we already are!) - if my plan comes together, I hope to be syndicating to and updating the useful ones of these more actively.

    I don't intend to delete or remove my twitter account, and I will set things up so I still get notifications, so nobody gets ignored. I might even automate some notifications to my twitter feed about updates to things elsewhere. I'm just not going to be participating as a human. I expect I will remove all the apps, so my turnaround on mentions might slow right down.

    If you're in the select category of people who only know how to contact me with twitter, there are many options. I haven't changed my phone number, should you know me well enough to have one of those. If you're looking for a way to DM to me, I cannot endorse keybase strongly enough. I think they're trying to do something really interesting, and could do with some more network effect. Sign up to keybase, and keybase message me, I love getting keybase messages, and I always respond. Invite me to your keybase groups! Also, please share your slacks and your newsletters and your mailing lists with me, if you think I'd like them, or they'd like me.

    Email still works, and I still read it. My address is even on my website.

    Finally, if you're reading this, and we've interacted on Twitter in some way, let me say a goodbye for now. If I was annoying, or argumentative, I'm sorry; I can be hard work sometimes. Maybe some of that was caused by the platform? If I was fun or charming or interesting, then let's work to stay in touch! If you don't really care, and you're not even sure how you got here from off of twitter, that's cool too; maybe I'll see you again in a year from now.

  4. Further work at hooking into micro.blog. If I extend my slightly moribund 'linkblog' special-case formatting to a new 'short post' class, and then generalise this to cover indieweb 'notes'-style posting, then I should be able to build a dedicated feed for notes that fits in better with micro.blog. These short updates will just appear inline on the blog site as de-emphasised text, without article formatting.

  5. I finally wired up micro.blogs! I have a micro blog. I'm not really sure what it is for, but it's there. I like it anyway, because it's called cms. In order to get it working, I had to make RSS work slightly better than 'barely', and so now I have an RSS 2.0 feed. The 00's are back! It's all about microformats and POSSE and syndication and decentralisation, and taking back the web.

    I appreciate it's an outside chance, but should you have a micro.blog account, you can follow me on there, and reply back, and be friends, and stuff. Should you not have a micro.blog account, but think you might like one, HMU, I probably have some invites or something. The indie web can never die!

  6. I've been very gradually upgrading this site back to life for a few years now. Very gradually #amirite. However, after finding myself accidentally on the front page of Reddit, HN etc. earlier this year with my post about building the IMDb boards, I found myself slightly embarrassed, not only by the amount of attention (40k+ uniques in the first two days, holy shit!), but also by people pointing out how clunky the site is to read. Often several times a day.

    The styling on the blog section, much like the rest of the blog section, wasn't in a terribly well-developed state of completion. I had just thrown together some hand-written CSS to approximate the look and colours of my last existing Wordpress theme, which I had been fairly happy with. Now, that theme was set up maybe ten years ago, my initial port over to this 'new', self-built CMS is maybe four or five years old itself, and I had given no thought at all to mobile, or in fact to any screen device very different from my own laptop display. And my main laptop display is a 1024x768 pixel non-IPS Lenovo ThinkPad x220. That is probably a significantly worse screen than your phone has.

    In 2017 it's pretty stupid to build web pages just to be viewed by desktop browsers, so today I'm pushing out a rebuild of the display layer and theme, that hopefully works a little more responsively across varied devices. It should also be easier for me to evolve. I hope it improves things for my handful of select readers. I'm not terrifically good at front-ending, and my heart isn't often in it, but I have tried my best.

    I'd like to be updating this site more frequently again, he writes, like one of those bloggers apologising for never blogging, but a large part of getting any kind of schedule working is streamlining the publishing workflow. To that end, as well as a more modernised front-end and theme, today I've also released a new site deployment system that allows me to update the site software more easily. This is clunky, but at least automated. Previously everything was just checked out into a home directory, hand-compiled, and run on the server. That's mostly still happening, but now it's all scripted with configuration management tools, so I can release updates like this without having to remember exactly how to set it all up again by hand from first principles.

    Of course, for writing articles, I'm still shelling into the server and hand-writing HTML files like a farmer, but it's all steps in the right direction. Sometimes I don't shell in to the server; I author the posts directly using emacs tramp-mode, which practically counts as using a GUI round here.

  7. ...as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.

    Download an EPub edition of this post courtesy of redditor agonnaz

    Update: My erstwhile colleague Mathias wrote up his thoughts about his role in this story

    scribbled design notes

    Some time on Friday, IMDb announced that they intended to shut down their message board system, permanently. I don't find this to be a particularly surprising decision. I'm more surprised that the message boards are still there, in 2017, seemingly essentially unchanged for the last fifteen or so years. They've had a few coats of paint, and a handful of feature improvements, but they largely seem to be backed by the same system design developed by the in-house tech team, way back at the dawn of the century. And for the bulk of that early development time, I was the primary developer. As it has said on my homepage for many years, 'you can blame me for the message boards'.

    A long time ago in a galaxy far, far away

    I was incredibly excited to be asked to join the IMDb developer team at the end of 2001, aged 30, with almost a decade of professional software development under my belt already. Although 2001 sounds today like the relative stone age of the modern web - which of course in many ways it was - at this point I had already spent several years working on basic web applications in the original dot-com boom, and I was in awe of the IMDb, which even back then was a somewhat venerable internet institution. Founded in 1990, it thus predates the invention of the World Wide Web by several years, having started out as lists of data shared via USENET posts. At the time I joined, they were a couple of years into their Amazon ownership, and starting to expand the team.

    As I started, they were just on the cusp of launching IMDbPro, and had an ambitious roadmap to completely rebuild the main website from the inside out, using the shiny new technology stack the small development team had built from the ground up to power the IMDbPro application server. This, I thought, was a very clever hack - imdb.com was a hugely popular website, and this approach of adding industry-focused features to a subscription remix of the site, built on top of the same data feeds (still basically formatted text lists, using the conventions of the old USENET-based tools), meant that in effect we could use the far smaller user base of the pro site as a test-bed for the new tech, and gradually port sections across to the terrifyingly high-volume 'consumer' site, without having to do a rewrite and a relaunch. To further sweeten the deal, this arrangement meant that the test-bed users would actually be paying to break in the newer software, and helping you iron out the bugs.

    In 2001, a shiny new high-performance web stack meant perl. Apache 1.3.x running mod_perl, to be more precise. In case you don't know what mod_perl is, it's a piece of semi-deranged brilliance that wraps the perl language interpreter into an apache module as a persistent runtime, and exposes the internal API of the HTTP server to it. This lets you write applications that are effectively apache webservers themselves, with direct access to every part of the HTTP serving lifecycle. Furthermore, by using the other neat hack, Registry.pm, you could take modules or scripts that had been designed to work as CGI scripts and get some of the same speed boosts, unmodified. With these techniques, you could write perl applications that went almost as fast as Apache could, and in the late 90s/early 00s it was this or PHP. PHP back then was pretty grotty, I thought, and the cool kids were all using perl. Perl had libraries, and excelled at gluing existing bits of UNIX together. This meant you had to write far less of the application by hand. Yup, by hand. Let me dig into that a little bit.

    It's the pictures that got small

    Writing web software back then was a fairly different prospect. In my circles, we didn't really have much in the way of frameworks. There were a few enterprise-y things floating around that converted your big IBM or Oracle or Microsoft client/server application into some kind of terrible intranet suite that required ActiveX support to load any pages, and I'd poked around Zope with some interest, but by and large, if you were doing anything interesting, you used FreeBSD, or linux (2.2, with SMP support!).

    You'd most likely use Apache 1.3, forking, and write your site as a combination of static pages, server-side templating, and CGI-exec'd programs in some kind of UNIX scripting language (usually perl, but any of the usual suspects were relatively common, including actual honest-to-god shell scripts), or maybe you'd write a performance-critical CGI as a binary in C.

    For data processing, you might connect your application directly to a pre-existing company RDBMS, if you had such a thing and your DBA, if you had such a thing, let you; or you might deploy a SQL db on or near your web host - usually MySQL 3.22, with ISAM tables and a quasi-religious intolerance for foreign key support, but that was OK, you could do all the data validation in application code. (A bit like JavaScript databases in 2017.)

    We had libraries for common tasks, like parsing wire protocols and file formats, and wrapping utilities to do things like generate or resize graphics, but you'd stitch a selection of these together in an ad-hoc fashion to make a 'system'. A typical web stack would be table-based HTML with attribute styling and inlined images for typography and spacing, possibly pre-rendered, but maybe dynamically generated; then some CGI scripts for user management, full of hand-coded cookie and session tracking. A relational database for persistence, using hand-coded SQL and a custom database schema. Page generation via a self-written templating system, gluing skeletons of layout-oriented HTML around variable interpolation with inline conditionals. This part would often run as server-side includes, but sometimes it would also just be handled by CGI scripts.

    Maybe you'd have a hand-built filesystem cache in front of this. 'Front-end' back then would often mean building static page representations, first in Photoshop or Illustrator, which would then be converted into single HTML page masters in Dreamweaver or FrontPage, and then handed over to the back-end coders to clean up and crack apart into templated fragments, by hand. Single-byte string encodings throughout, no threading, a light veneer of Object Orientation over internal data structures - you'd have a small cluster of actual physical servers, perhaps in a data center, but often on-premises, sometimes in racks, sometimes actual tower servers in the corner, directly connected to an internet router of some pitiful capacity. Sometimes your cluster was as small as one machine.

    Architecturally you'd have a webserver, perhaps two if you wanted to split 'heavy' dynamic serving from lighter or static content. Your database might end up on its own box with better IO and networking. If you had enough web servers, you might put some kind of load balancer in front, perhaps a HTTP reverse proxy as an accelerator cache (often another Apache, sometimes Squid). In 2001 I'm not sure I fully understood what a CDN even was. You'd deploy with FTP or maybe rsync; sometimes the production filesystems were locally mounted via NFS or SMB and you'd just copy stuff over, or edit it in place. Version control, if you even had any, might just be renaming files, perhaps SCCS or RCS. Advanced users might have CVS. Designers might have a pre-OS X Macintosh, suits would use Windows, developers had something more of a free-for-all - windows 2k, desktop linux, I used BeOS for several years whilst that was still a thing - and seemingly everybody, but everybody, used emacs to write code. GNU emacs was common, but the cool kids were using XEmacs. Sometimes a remote XEmacs client on your deploy host, attached to your local X11 server over the wire. Crazy days.

    My God, it's full of stars

    So that's the scene in 2001, when I joined the amazon.com family as an SDE, working on the new IMDb platform. I was a fairly hot perl programmer, having spent a good few years designing and rewriting custom web 'frameworks' and optimising mod_perl architectures. I was really good at SQL, at least I thought I was in comparison to most of my peers, and I had developed a particular fondness for the then slightly uncommon PostgreSQL database engine. I'd done quite a few web things - early corporate intranet portals, hobby sites, moderately popular dot-com publishing houses - but this was a step change into an entirely bigger league.

    In reality, especially as I look back with hindsight, I can see I had very little idea what I was doing, but hardly anyone did. There wasn't a lot of published material on architecture - everyone read Greenspun, but there was nothing like the modern tech web, scalability porn, conference circuit. No HN, no Reddit, no twitter, no Facebook, and looking things up on StackOverflow was still almost a decade away. It wasn't even that easy to find what scant information there was; you have to remember that Google was barely yet a thing. Information sharing tended to happen on mailing lists, using actual email, or maybe still on USENET. (Paul Graham hadn't yet written 'A plan for spam', and we didn't really have functional automated spam filtering.)

    IMDb had an unusual working setup for the day, as befitted its birth from a federation of USENET correspondents. Everyone worked completely remotely, scattered around the world. At the time I joined, there was an express preference for staff who could attend a weekly company meeting over lunch, near Bristol (in a cafeteria, attached to a swimming pool), and the majority of the tech team building the software was by then based around this area. Home internet connectivity was still largely 56kbps-or-lower dial-up, possibly metered, although I was lucky enough to be in a part of Bristol eligible for an insanely fast 1Mbps cable connection.

    Anticipating having to work on significant amounts of DP, potentially offline, I asked if I could be provided with a small server with SMP and RAID capacity, and was rather surprised by a small tower HP Proliant rig turning up at my house, cocooned onto a loading pallet too big to fit through the front door. I had to unglue it piece by piece and carry it up to my 'home office', a box bedroom full of IKEA tables slightly too tall to be comfortable desks, and assemble it in place. I christened it mavis.imdb.com and installed Debian stable on it, which involved most of a day figuring out the hardware RAID drivers, and from that point on its shrieking fans and disks were a constant part of my daily life for the next half-decade. Eventually a house move allowed me to get it into a makeshift server cupboard, where I could deaden this persistent din behind a door and blankets and curtains. I occasionally wonder now, in my middle age, if I have a frequency gap in my hearing to match that particular pitch, but if so, it's not affected me enough to care to get it measured. As the noise tended to interfere with music, for the first few years I developed a habit of listening to BBC Radio 4 morning to midnight, and therefore, when there wasn't a test match to listen to, for a brief period of my life I developed an unusual degree of expertise in the comings and goings of 'The Archers'.

    One consequence of the remote working and patchy connectivity was that development work in the tech team was informally siloed into sub-systems that individual engineers had ownership over. The very first task I worked on, after getting a working build of the entire stack onto mavis, was porting the statistics page across to the new web stack (internally known as 'mayhem', after project mayhem; everyone was big on movie references, naturally), by way of familiarising myself with the application and infrastructure. I made a perfunctory stab at that, and then I went searching for something more substantial to own. The forums, or 'message boards', seemed a natural candidate.

    The most recent piece of work I'd done at my previous gig had been to contribute a threaded discussion system to our general-purpose content management system, which allowed a tree of conversations to be attached to any content id in the catalogue, so the site users could have a threaded comments section attached to any content. This had worked out pretty well. By contrast, IMDb had a pretty threadbare generic forum system: a standalone phpbb installation, almost entirely isolated from the rest of the system, organised into a few dozen general-purpose boards, with, I think, even a separate login system.

    A business goal for the next year was to drive up user registrations, and the forums seemed like a good feature to assist with this: they offered additional site value that was only available to registered users. Another target was to integrate the boards more directly into the movie database, allowing people to have conversations directly attached to the pages for movies and shows. Another important requirement was to allow for a system that would let the data contributors communicate directly with the data management team. So I was tasked to do something with the forums to meet these broad goals, and the implementation and design of it was largely up to me, informed by regular feedback from the wider team, via weekly progress reports and the team lunch meeting.

    We're going to need a bigger boat

    I considered a number of approaches.

    • I could have extended the PHP forum system as it was, to support the new features, but I didn't really consider that for more than a couple of minutes - it was PHP, which I didn't know terribly well, and disliked, and it would have been harder to integrate tightly with the rest of the mayhem app, which was a domain-optimised mod_perl web service.
    • I wondered about wrapping a USENET service, which had a lot of appeal, inasmuch as a lot of the base mechanics of hierarchy would already be covered, with a highly scalable architecture and a portable standard with several existing back-end implementations. I really liked this idea a lot, but I rejected it eventually when I realised that it would be difficult to build an integrated web front end that offered as much functionality as a stand-alone newsreader. If I had been able to find a decent open-source web NNTP client, I might very well have done this.
    • Another alternative would have been to find an existing forum system that was more amenable to customisation. I considered using the slash system that powered slashdot.org, but I rejected that because at the time it had a reputation for poor performance and uptime, and was struggling to cope with trolls. I really should have paid more attention to those ideas, both of which would come back to haunt me.
    • Eventually, using a mixture of naivety, hubris, ego, enthusiasm and pragmatism, I decided I'd build something custom, scaling up the ideas I'd used for the comments module in my previous job.

    The basis for that system was something I was quite proud of, and in some senses it was quite a clever hack. We had wanted threaded discussions, but it's famously tricky to model trees in SQL. My first attempt, hydrating flat lists into trees at runtime from a SQL result set, was computationally a little too expensive for the hardware of the time, and slowed up page rendering on articles with comments.

    So I came up with an ingenious scheme. I'd store several sort fields against the comment records - one representing the vertical position in the thread, and one representing the indentation level. Every time a reply was inserted into a comment thread, I'd compute the correct indent level by adding one to the parent's, set the vertical position to one larger than the parent's, and then increment every larger position by one, so that the rows were stored sequentially in thread order when read by that index. As I was also storing the timestamp and a sequential post id, I thereby had a system that could trivially read back conversations in order of time, order of posting or order of reply. This meant that posting was relatively computationally expensive, but only on the database server, whereas reading was simple and fast. I reasoned that reads were many times more frequent than writes, and biasing the system this way would optimise it for the common case, and avoid the need to build a cache invalidation system.
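
    For illustration, here's a minimal sketch of that write path, in Python over sqlite3. The original was Perl against PostgreSQL, and the table, columns and helper below are all invented for this sketch, not the real schema.

     import sqlite3

     db = sqlite3.connect(":memory:")
     db.execute("""CREATE TABLE comments (
         id        INTEGER PRIMARY KEY,  -- sequential post id
         thread_id INTEGER,
         position  INTEGER,              -- vertical position in the thread
         indent    INTEGER,              -- nesting depth
         body      TEXT)""")

     def add_reply(thread_id, parent_id, body):
         pos, ind = db.execute(
             "SELECT position, indent FROM comments WHERE id = ?",
             (parent_id,)).fetchone()
         # The expensive bit: shift every later row down one slot so the
         # new reply can sit directly underneath its parent.
         db.execute("UPDATE comments SET position = position + 1 "
                    "WHERE thread_id = ? AND position > ?", (thread_id, pos))
         db.execute("INSERT INTO comments (thread_id, position, indent, body) "
                    "VALUES (?, ?, ?, ?)", (thread_id, pos + 1, ind + 1, body))

     db.execute("INSERT INTO comments (id, thread_id, position, indent, body) "
                "VALUES (1, 1, 0, 0, 'Opening post')")
     add_reply(1, 1, "First reply")
     add_reply(1, 1, "Second reply")  # renumbers the first reply downwards

     # Reading back in reply order is then one cheap indexed scan;
     # ORDER BY id gives order of posting instead.
     for row in db.execute("SELECT position, indent, body FROM comments "
                           "WHERE thread_id = 1 ORDER BY position"):
         print(row)  # (0, 0, Opening), (1, 1, Second), (2, 1, First)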

    This system had actually worked out pretty well in practice, at least for the Accounting Web comments sections. Although it's conceptually neat, it's also actually pretty fucking dumb, for a couple of reasons:

    1. updating records has a high overhead in PostgreSQL, because its concurrency implementation (MVCC) writes a whole new row version for every update
    2. adding a comment becomes linearly more expensive as the thread grows; the more popular the system gets, the more work each individual post takes, and the total cost of a thread grows quadratically (see the sketch just below)

    Oops.
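
    To put hypothetical numbers on that second point (illustrative figures, not the ones from my long-lost notebooks): if inserting the k-th post renumbers roughly k existing rows, a thread that grows to N posts costs about N(N-1)/2 row updates over its lifetime.

     # Worst-case row updates to grow a thread one post at a time, when each
     # insert renumbers every row after it (illustrative numbers only).
     def lifetime_updates(n):
         return sum(range(n))  # 0 + 1 + ... + (n - 1) == n * (n - 1) // 2

     print(lifetime_updates(50))    # a typical thread of the day: 1,225
     print(lifetime_updates(5000))  # a runaway hit: 12,497,500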

    I wasn't entirely stupid; I had anticipated this downside, and I'd done some scaling calculations on paper to see what the cost of implementing this for IMDb would be. And here I made my first actually stupid mistake: I used the metrics of the existing forum system to try to predict the capacity of the new one. I can't remember the exact numbers now, and I've long since misplaced the notebooks, but it was something lower than a thousand posts a day, and the average thread length was a few dozen posts. Amazon could afford a useful database server, and it seemed like I easily had a couple of orders of magnitude of headroom. Telling myself that premature optimisation was the root of all evil, and conveniently ignoring the fact that this design was literally entirely born of an optimisation hack, I decided to proceed with this scheme.

    Show me all the blueprints

    I gave the design a lot of thought. I had been a USENET user back in the glory days, before spam and binaries had rendered it toxically uninhabitable. I adored slashdot. I'd used a lot of shitty web forums since then, and I had designed a flexible engine that could handle any kind of post-based discussion grouping. I thought this was a great opportunity to design a discussion system that I'd want to use myself: scratch your own itch. I think I already mentioned that I didn't really have much idea what I was getting myself into. Ah, youth.

    I thought that most of the grief and spam I'd seen in other systems, was primarily because of the cheapness and disposability of user identity. I figured we could tie that down by disallowing anonymous posts, which was aligned with the goal of increasing user registrations already - maybe ultimately we could link them into amazon.com accounts, and therefore real identities. I wanted to give the users the ability to personalise and curate their site home page, so they'd have an investment in a community they valued, and would be publicly accountable to.

    Another thing I'd noted about other forums was how quickly they stagnated into a dominant clique, and deterred new joiners. I decided this was in part because of the permanent record; the conversations got stale because everything had already been said, and the groups then tended to be dominated by handfuls of high-status members with visible post-history. Groupthink dominates, outsiders are shunned, filter-bubbles prevail. I thought that an interesting solution to this would be to actively expire user posts. IMDb already had a system of user reviews for more static user content attached to database entries. The boards were for conversation - so we'd just periodically remove older content, and make no secret about it. This should stop the entropy lock-down, and also give us a mechanism to keep a lid on the database / thread size to help with performance. Everything should stay fresh and sparkling and self-rejuvenate.

    I know lots of this was naive thinking, and with 2017 hindsight it's easy to see the flaws. In 2001, though, there was much less experience of online community management. We thought we knew about trolling, because we'd experienced previous communities, but I don't think anyone yet had a handle on the scale and scope of it in a genuinely mass-medium consumer Internet.

    I really wanted nested threading, which is a very good, perhaps too good, way to promote reply-oriented posting and reading. For that same reason, I didn't want threading to be the implied default mode, because I thought it promoted point-by-point refutation, which led to arguments and flame-wars. So I envisioned a system that could seamlessly move between a flat or a nested view, with a cookie to fix it to your individual preference.

    Each post would offer two actions - a new top-level post in the thread, or a reply to that particular post - and the different view options would let you see how the thread timeline fitted together from each point of view. I felt this would encourage replies, without mandating them as the only form of discourse. The organisational hierarchy was therefore: a topic (either a generic board, or a database object), containing threads, each defined by an opening post made at the topic level, which then collected numerous replies, which themselves could have sub-threads of replies.
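
    Because both sort keys were stored on every row, flipping between flat and nested views needed no tree-building at read time; the renderer just chose whether to honour the indent column. A rough sketch, again with invented names:

     # Rows arrive from the database already in thread order:
     # (position, indent, author, body)
     rows = [
         (0, 0, "alice", "Opening post"),
         (1, 1, "bob",   "A reply"),
         (2, 2, "carol", "A reply to the reply"),
         (3, 0, "dave",  "Another top-level post"),
     ]

     def render(rows, nested=True):
         for position, indent, author, body in rows:
             pad = "  " * indent if nested else ""  # flat view: no padding
             print(pad + author + ": " + body)

     render(rows, nested=True)   # threaded view
     render(rows, nested=False)  # flat view; sort by post id instead to get
                                 # order-of-posting rather than reply order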

    Mindful of the fact that this was still an era of expensive and slow dial-up and low-end computers, I wanted the ability to view in narrow or expanded modes. I didn't want to force people to download gigantic pages of browser- and modem-choking deeply-nested table layouts, so we would flip between outline and expanded views, as well as flat or nested. I wanted people to have a static but customisable home page that they could add content, style and flair to, hoping to give them a sense of curation and ownership and identity that would act as a brake on too much antisocial or negative behaviour. I'm not sure I was even smart enough to wonder if people would use their home page to host offensive content. (Of course, some did.)

    So I started to build it. Initially it went really well. On the data model and storage engine side of things, I was on a pretty solid footing, it was familiar ground. I carried on using PostgreSQL, and we specified a decent (for the times) server to host it on. No H/A or replication at all. I'm shocked at that idea now, but at the time I had reasoned that we were building an ancillary, purposefully ephemeral side-car discussion system with a different storage layer to the main site, and we'd be fine with regular hot backups - in the case of disaster we could shut them down without affecting the main site, and restore from backup. In the case of total and utter catastrophe, we could just reset them to zero and start again, they weren't designed for permanence anyway. Feedback about the design and features from the rest of the team was positive, with plenty of enhancements and suggested tweaks, and the system started to take shape.

    The UX layer was way harder than I'd anticipated, and because of this I started to get a bit bogged down in the 'second 90%' of the first deliverable. The mayhem engine that the team had built (a really clever piece of software design that I don't have room to do justice to here) had never yet really had to cater to highly dynamic pages - its core purpose was to serve flexible views of an almost read-only, statically compiled dataset of movies and people, and it was originally built around doing that in a particularly optimised way.

    I had to build up my own HTTP POST and form handling layers that would integrate with the existing HTTP handlers, starting from a somewhat lower level than I was used to, and this soaked up quite a bit of testing and debugging time. Even worse was the display code. We didn't really have much facility for dynamic page layout in the templating system, which was both highly customised and complex. The site page templates were used to drive the static build system, via a custom compiler: the markup in a template specified what data views would be generated by the build, which directed the data builders that compiled the binary movie database. The pages were effectively compiled down to a stub handler for a specific route, which would seek to the object index in a particular data index, and then basically sprintf the data out of port 80 as a hydrated web page. This was a fast way to serve varying pages with identical structure, but not immediately well suited to highly adaptive, constantly updated live pages or submission forms. Still, I wanted the boards system inside the existing stack as far as I could manage, and so I laboured to build the missing features in a way that could integrate well, which involved at least one complete abandonment and rewrite of the internal API.

    The actual boards display templates themselves were a significant time soak. We had a great designer, who took my ugly box-tables prototype output and turned out nice-looking blueprint designs for all the various view modes and forms, as static web pages. This was of course the era of the browser wars, and we were expected to support a bewildering array of user agents from the Netscape 3.x era onward, inclusive of weird-ass things like AOL clients and MSN web-tv set-top boxes and goodness knows what else...

    Busting these intricate table-based views apart and back together again into a cryptic markup-and-logic language, and adding the various (session-global) mode flags so that all the different view combinations rendered as functional pages that degraded gracefully, took me weeks. I was slipping past shipping dates and entering a terrible crunch death-march just to get something out of the door. Unhelpfully, this was all happening at a time when I was having a few strains in my family life, and also struggling to balance it all into a sensible routine of working from home; I was ping-ponging between getting distracted away from 9-5 and then overcompensating by working across nights and weekends. Eventually we had to pull out features to ship.

    I drastically cut back the home page customisation, abandoned all the planned but unstarted work for a search index, and only had time to add the most rudimentary admin features. I had wanted to migrate the existing posts across to the new system, but I'd not even begun to start on that, and that also hit the cutting room floor. With a lot of assistance from the rest of the tech team to get it over the line, we hit publish on the initial TNG boards system some time in the summer of 2002, later than planned by some months. This pattern of the message boards being more work than expected for all parties that touched them would be the prevailing tone for the next several years.

    A test designed to provoke an emotional response

    User feedback was immediately negative, and highly vocal. Lobbying started instantly for the reinstatement of the previous system. People complained about the new designs, the complexity of the new display options, the inevitable launch bugs. I was silly enough to join in the conversation to help explain the launch and solicit feedback, and from that point on I had an onslaught of direct contact messages and emails, occasionally positive and friendly, but more often than not weird and offensive, sometimes abusive. You do try to tell yourself that you can just ignore the trolls, but in truth it is quite difficult to remain completely unaffected by emails that compare you to a child rapist and call for your death in offensive terms, even if it was only provoked by you breaking a font size in a particular version of Internet Explorer 3. You never quite get used to that, I find. I was pretty crestfallen by all the negativity after all that work, although the team were positive, and assured me that some of the board users could be like that, that in general people are more vocal when they're complaining, and that they are naturally somewhat resistant to change. I still felt pretty down.

    My mood did soon change, after a few weeks. The new boards were kind of a hit. Maybe a smash hit. They quickly overshot my scribbled calculations of scale in a slightly worrying manner. With some judicious database tuning, the performance stayed OK though. For now. Then we added links from every title page. (IMDb pages were sub-grouped into title pages, for TV shows and movies, identified by a key called a tconst which looked like tt1234567, and name pages, for people, robots, animals etc. from cast and crew, identified by a key called an nconst which looked like nm1234567. Top-level boards un-linked to other database objects therefore got a new key type called a bdconst; somewhat inconsistently, these looked like bd1234567, and didn't matter very much because there were only ever a few dozen standalone boards.) And the numbers started to properly hockey-stick.

    At the time we used to compute the page views in a weekly report which broke out the top N subsections according to first-level directory. We never shared traffic numbers publicly, and so even after all this time I will be respectfully coy, but the highest chart positions were obviously things like /title, /name, /search, /news, /chart etc. At launch, the boards were lurking down the bottom, nowhere to be seen, but after we started the title conversations they were solidly into the top five, where they remained with ever-accumulating numbers, and user registrations clocking up correspondingly.

    From that point on, I spent a significant amount of my waking life 'doing the boards' for the next several years. Initially I was scrambling to put in the missing features we'd pulled before launch - post editing, then markup for posts and profiles in a hand-rolled version of BBCode. Again with a stupid insistence on display-time optimisation, I converted this to HTML at write time, which meant that when we added post editing, I had to parse the HTML backwards into bbcode to be re-edited, all with a misconceived series of chained regular expressions. This led to an endless sea of parse bugs that pretty much guaranteed the markup and emoji set (although they weren't called that yet, we called them 'smileys') would be, once fixed, effectively sealed forever, even though I'd taken the trouble to add an admin edit tool that allowed updates to the markup to be made by non-developers through the CMS API.
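
    A sketch of the kind of chained-regex round trip I mean; these two rules are illustrative, the real markup set was much bigger:

     import re

     # Write-time conversion: bbcode in, HTML stored in the database.
     def bbcode_to_html(text):
         text = re.sub(r"\[b\](.*?)\[/b\]", r"<b>\1</b>", text)
         text = re.sub(r"\[i\](.*?)\[/i\]", r"<i>\1</i>", text)
         return text

     # Post editing then needed the inverse, also chained regexes...
     def html_to_bbcode(html):
         html = re.sub(r"<b>(.*?)</b>", r"[b]\1[/b]", html)
         html = re.sub(r"<i>(.*?)</i>", r"[i]\1[/i]", html)
         return html

     # ...which only round-trips while the two chains mirror each other
     # exactly. Every new tag or smiley multiplies the ways an edit cycle
     # can drift the text away from what the user originally typed.
     print(html_to_bbcode(bbcode_to_html("[b]bold [i]and italic[/i][/b]")))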

    I'd thrown together a naĂŻve search API, entirely based on un-indexed SQL substringing, which I'd fully intended to replace after launch. It never worked: the system filled up so quickly that searching killed the page cache entirely by constantly table-scanning the texts, so much so that I spiked it in the first week, and never got a chance to work on its planned replacement. I was still getting emails complaining about that five years later, after I'd left.
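
    For the record, this is the shape of query I mean (schematic, with an invented schema): an un-anchored substring match can't use a btree index, so every search walks the whole table, evicting hot pages from the buffer cache as it goes.

     import sqlite3

     db = sqlite3.connect(":memory:")
     db.execute("CREATE TABLE posts (post_id INTEGER PRIMARY KEY, body TEXT)")
     db.execute("INSERT INTO posts (body) VALUES ('the usual flamewar')")

     term = "flame"
     # The leading % is the killer: no btree index can serve it, so this is
     # a full table scan, running on the same server as the live site.
     hits = db.execute("SELECT post_id FROM posts WHERE body LIKE ?",
                       ("%" + term + "%",)).fetchall()
     print(hits)  # [(1,)]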

    With the surging popularity came increasing amounts of negative user behaviour, and I had to devote increasing development time to adding abuse-processing tools for our small moderation team, onto what had only ever been an afterthought of an admin system. We never proceeded to link up the user accounts to amazon accounts, and I'd never planned to add user-driven moderation. My quixotic hopes for user killfiles (renamed to 'ignore lists', which is a far better and kinder name) and global killfiles (known as the 'Phantom Zone', because I love Superman) with account history purging and deletion weren't enough on their own, and the tooling for processing abuse reports was too clunky and slow, largely because I hadn't planned enough for it from the outset.

    I was now fighting a constant war on two fronts. The popularity of the system was way beyond my original estimate of a few thousand posts a day; we quickly escalated to a point where the really popular off-topic boards were ersatz real-time chatrooms, accepting hundreds of posts a second at peak times. All of this in a cursor-pooled, synchronously blocking database directly attached to the HTTP display servers. I spent a great deal of my work time just constantly rewriting sections of it all to squeeze efficiency out of this setup: first with indexes and schema changes, then with hardware upgrades and tuned and profiled system software, then with a complete rewrite of all of the database logic to use stored procedures, and finally a long-overdue table sharding, so we could cluster boards between different tables and tablespaces to balance the IO and garbage loads. At the same time, on the other front, we were trying to come up with ways to lower the proportionally increasing cost of trolling and abuse.
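
    The routing side of that sharding is conceptually simple. Something like this sketch (invented names and numbers, not the real scheme): pin the known-hot boards to their own tables, and hash the long tail across the rest.

     import zlib

     PINNED = {"bd0000042": "posts_offtopic"}  # hypothetical chatroom board
     N_SHARDS = 8

     def posts_table(board_id):
         if board_id in PINNED:
             return PINNED[board_id]
         # crc32 is stable across processes, unlike Python's builtin hash()
         return "posts_%02d" % (zlib.crc32(board_id.encode()) % N_SHARDS)

     # Queries then interpolate the table name before hitting the database;
     # each table can live in whichever tablespace balances the IO best.
     print(posts_table("tt0133093"))  # one of posts_00 .. posts_07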

    My partner was temporarily stationed away in London by this point, so I was home alone, aside from the dog. Workdays quite often consisted of walking 12 paces from the bedroom, still brushing my teeth at about 09:30, getting a support email, starting to poke at something interesting on the boards, and then not giving up until the small hours of the next morning. I was fairly obsessed with all of it, and my health was suffering, although I was too close to it all to properly see this at the time. I developed a weird collection of neurological symptoms which stubbornly refused diagnosis, and subsequently appear to have been entirely stress-induced.

    We were still choking at peak load times, and it was starting to have a knock-on effect on the rest of the site. Eventually, a super-talented colleague helped me out by implementing a workable version of my poorly articulated designs for a caching database proxy. Implemented by him seemingly overnight, in C, it spoke the PostgreSQL wire protocol and cached result sets in a filesystem that we mounted on a ramdisk. Kind of a home-brewed combination of memcached and pgbouncer. The simplicity and effectiveness of this just took my breath away, as did the lesson that if a software thing doesn't exist, you can just make it yourself. Everything is just ones and zeroes, as I am very fond of saying to this day.
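
    I can't reproduce his C, and the real thing sat transparently on the wire protocol, but the core trick fits in a few lines of application-level Python (the paths and names here are made up): key the cache on a hash of the statement, keep result sets as files on a tmpfs mount, and invalidate by just deleting files.

     import hashlib, os, pickle, sqlite3

     CACHE_DIR = "/dev/shm/pgcache"  # any tmpfs path; /dev/shm on linux
     os.makedirs(CACHE_DIR, exist_ok=True)

     def cached_query(db, sql, params=()):
         # Key the cache on the exact statement plus its parameters.
         key = hashlib.sha1(repr((sql, params)).encode()).hexdigest()
         path = os.path.join(CACHE_DIR, key)
         if os.path.exists(path):       # hit: serve straight from ramdisk
             with open(path, "rb") as f:
                 return pickle.load(f)
         rows = db.execute(sql, params).fetchall()  # miss: ask the real db
         with open(path, "wb") as f:
             pickle.dump(rows, f)
         return rows

     db = sqlite3.connect(":memory:")
     db.execute("CREATE TABLE t (n INTEGER)")
     db.execute("INSERT INTO t VALUES (42)")
     print(cached_query(db, "SELECT * FROM t"))  # miss, fills the cache
     print(cached_query(db, "SELECT * FROM t"))  # hit, served from tmpfs
     # Invalidation can be equally blunt: delete the files, or remount.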

    With this addition we got to a place where the system was in enough of a steady state. We implemented more banning and reporting, and added a reputation-based system that slowed the rate of posting for users with lower reputation scores, which also helped reduce the saturating write loads at peak. Eventually we added an automatic moderation robot with a learning capacity and pluggable rulesets. I called him Spike. He worked fairly well, if a little bluntly at times.
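
    The throttle amounted to a minimum interval between posts that stretched as reputation dropped; roughly this shape, with invented names and numbers:

     import time

     last_post = {}  # user_id -> timestamp of their last accepted post

     def min_interval(reputation):
         # Invented curve: trusted users wait 5 seconds, the worst wait
         # five minutes between posts.
         return max(5.0, 300.0 - reputation)

     def may_post(user_id, reputation):
         now = time.time()
         if now - last_post.get(user_id, 0.0) < min_interval(reputation):
             return False  # too soon; as a bonus, this sheds peak write load
         last_post[user_id] = now
         return True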

    I hope I'm not giving the impression here that it was all entirely negative. It was definitely a rollercoaster few years. Exhilarating, and also very entertaining. The boards were a living thing that had sprung out of nowhere, literally something I'd created in my spare bedroom. It sort of felt like a Pacific-Ocean-sized colony of sea-monkeys, eternally fizzing away with unexpected activity right there in my spare room.

    Although they were often frustrating, the users were also inspiring, and creative, and surprising, and occasionally pretty funny, even some of the (gentler) trolls. On top of an understandable level of frustration and annoyance, I generally felt a sense of sympathy for them, and their complaints and frustrations with the system. All of this was before the age of 'social media', and I could almost feel the shape of it hanging there, slightly beyond where we were heading, off-piste and in a direction we probably shouldn't venture into.

    A consistent surprise was the amount of effort people put into curating their limited patch of profile space, and how social, and to us off-topic, it all was. We were constantly running into people trying to use the boards as personal social spaces - I argued for providing individual personal boards for every user at one point, but the management team explained that we weren't really in the core business of general social networking. It confused me at the time, and I had to think about it for a while, but I think that was correct thinking, and there's a lot of wisdom there. You simply can't do all possible things well. With a small team, and a big world, you benefit most from focusing entirely on the things you're best at and the things you want to be better at.

    A few of the sillier trolls stand out. There was one early griefer, whom we very easily IP-traced to a school library, I think based in Canada. We waited until he was in mid-session one afternoon, and then, if I recall correctly, management called his head teacher, who was able to apprehend him in the act. There was another, very silly catfish troller called tabitha_cyeg, with an obviously manufactured identity. Their M.O. was posting bizarre conspiracy theories about the site technology, and myself; they claimed to have hacked in using l33t-sounding but completely irrelevant NetBIOS vulnerabilities, replete with faked server logs, and on one occasion produced 'hacked' emails from myself revealing my true name to be something along the lines of 'Claude M. Savoire'. Quite a few users were seemingly entirely convinced, but to me it was Pythonesque.

    Getting contacted by the Feds to deal with users who'd been posting death threats about President Bush was weird, at least the first time, and I got a few PMs and emails from actual industry figures, which was always quite exciting. I personally banned a moderately famous Hollywood producer this one time, for abusive posting, which is something of a curiosity. I remember going to watch Jay and Silent Bob Strike Back at the cinema around this time frame, and getting a particular kick from the sub-plot where they individually visit all the internet forum posters who have been rude about their previous films.

    I watched people fight and friend. Saw a few romances and a marriage or two emerge from the regulars. I read, and occasionally got involved in, against my better judgement, fascinating and productive conversations. I still bump into people IN REAL LIFE who reminisce about the boards, and who are to this day impressed when I tell them I had a big hand in their genesis. I once spent an evening on a darkened restaurant patio, overwhelmed to tears, as a kind man explained to me that his young daughter, hospital-bound and dying of cancer, had used the Harry Potter IMDb boards as her main social life in her last year, and how much that had meant to him and her. Stories like that are just a profound privilege to have had even the most tangential involvement in.

    And I learned so much. Working with such a smart team, on such a great and special piece of the internet. Learning about every aspect of scaling a web stack, from the disk blocks up to the network and back down again. This era was still 32-bit Intel hardware, and I learned a huge amount about that, and UNIX profiling, and the linux virtual memory system and file system, and HTTP caching. I made so many mistakes, because there just wasn't any other way for me to learn, and I did figure out how to fix or improve on many of them.

    I learned about PostgreSQL internals, from the wire protocols all the way down to the storage models, in some detail, and to this day I'm a pretty great PostgreSQL DBA, when I need to be. I learned a lot about UX influence and steering behaviour, albeit mostly by getting it wrong. I learned about building search engines, and service-oriented architecture, and why you really shouldn't hang responsive systems off blocking I/O, and that maybe message queues are useful. I learned how to measure system performance all the way down to the CPU cache level. I learned how to keep focused on problems I didn't yet know how to solve, or perhaps didn't yet understand. I learned lots and lots of things about movies and cinema history, much of it just by osmosis, poring over the data sources. I learned how to better manage my own time and projects, and I learned what it feels like to burn out, and what you should do about it when you know that you are. Since I left Amazon.com, I've had a great and varied career, and I think at least 75% of the useful things I know how to do well I learned first-hand on that gig, and I've always treasured, and respected, that.

    Always. Be. Closing.

    And now they're shutting the boards down. I first heard about it via text message, oddly enough; but shortly after that it was all over my news feeds, followed by a slow stream of emails checking in. Friends, ex-colleagues, some of them former boards users. I felt an odd sense of shock about it, in a way, and slightly emotional. Sixteen years is a ridiculously long time in Internet years. The web itself wasn't sixteen years old when I joined Amazon, and nor was IMDb, which is older still. I don't use the boards myself any more, although I do occasionally look over them, perhaps once or twice a year. It's been clear for a while that they're not getting a fraction of the use that they once did, and that's fair. The web is an entirely different thing in 2017, and that's also a good thing.

    Communications technology evolves, and hopefully improves, all the time. People have all kinds of social networking now for communicating, and the bulk of it is happening on different, smaller screens than anything I could have envisioned when I was first sketching out some pencil ideas in a gridded notebook. An actual Filofax, I believe. It was very humbling to see the amount of twitter traffic noting the IMDb announcement, as well as the number of actual proper news sites that wrote this up as something significant. The Verge report seemed to think the IMDb message boards were era-defining. That's something, I guess. All things must pass.

    There's just one more thing that's bothering me

    'Mjeyds'. In the IMDb board bbcode syntax, there's a particular smiley that you mark up using this bizarre word. People occasionally ask what the term means, and I've always enjoyed the mystery, being one of very few people in the world with any claim to know the answer. I guess it's now or never for the reveal.

    The emoticon set was curated, uploaded and configured by my erstwhile designer colleague. He took responsibility for naming them. He wasn't English, hailing from Denmark, I believe via several other countries. When I pressed him for an explanation of 'mjeyds', he said it was supposed to be onomatopoeic of the way the late Graham Chapman says a languorous 'yes' whilst sucking on a pipe in a scene from Monty Python's The Meaning of Life. If it is, I guess it works better if you're using a Danish alphabet? If you've come all this way through this post only to find out the answer to that question, then I am sorry if it is an anticlimax, but thank you for reading. Maybe some things are better left mysterious. Another lesson learned.

    Crazy Credits

    This is a personal web page, and an entirely personal and subjective retelling of my own experience building and maintaining a small section of IMDb.com a long time ago. Whilst I'm happy to take personal responsibility for a large amount of the boards' creation and inspiration, I don't want people to get the impression that this was in any way a solo effort. All of the work outlined above was produced in the context of a small, dedicated team, and although I've refrained from naming names and attributing ideas, this is born more of a desire not to miss anyone out - after this amount of time there's simply no way I can credit individuals for the parts I can remember without failing to attribute others for equally important contributions I have forgotten. I've done my best to be honest about facts and timelines, and tried not to infer too much about third-party motivations, but I know I've forgotten things and misremembered others. Working from memory, after this amount of time, such errors are only human. If you spot anything terribly wrong, or have any questions or corrections, please get in touch. I'd like to thank the entirety of the IMDb team 2001-2005 for working with me on all the aforementioned things, and more. Great team, great times.

  8. And just like that, we're back. What happened, cms?

    It was never entirely my intention to go offline for such an extended hiatus. Even though the web is intrinsically brittle and ephemeral, I like to do my bit to keep my little backwater serving 200 OKs to the half-dozen people who stop by to check in regularly, and the couple of dozen who linked to something I put up at some point. It's basic web-citizenship, as far as I'm concerned.

    Before we went fully dark, I'd not posted for a long time already. And before that I'd slowed my posting down to something of a crawl. I think there are a few reasons for that. It's easy to get bored with blogging for the sake of blogging, especially in our current age where everyone shares profligately across many social platforms. It's fairly common to see blogs that have fallen into a recursion of no posts for months, then a post apologising about that, and then further disuse. I don't think this is one of those, but the proof is in the posting, I suppose.

    There's certainly been less time in real life for auxiliary pursuits like online rambling, and that's a big part of the reason. Having no time for proper content posts, concomitant with a surge of alternative social platforms to play around with, meant it often seemed a bit redundant to post arrays of short-links when I could just throw them up on twitter / adn / diaspora* / flickr / ello / imzy / whatever, with a bigger audience and more interaction.

    I was also feeling a bit self-conscious about standing up in public. After leaving last.fm (fairly amicably, as these things go, fwiw, albeit with a slightly battered heart), which felt like a fairly visible shift sideways, I was quite deliberately courting more obscure, maybe more unexpected job roles, and I remember feeling like I really didn't want to bare my thoughts to the internet judgement machine whilst I wasn't even entirely sure what I was doing myself a good deal of the time. Also busy! Young family plus startups really left little time for anything much else.

    I was also really feeling the pain of WordPress. I never quite managed to find an authoring approach to use with it that didn't make writing anything seem like far harder work than it ought to be; and because I always insist on self-hosting, there was the sheer weight of it to maintain: security updates, and backups, and DBA-ing, and having to write PHP, or perhaps even plugins, to do the inevitable customisations someone like myself inevitably gets suckered into doing. So WordPress was a drag, which was feeding my reluctance to contribute much of substance. So I decided to pause on updating whilst, in true wannabe-hacker style, I whipped together some kind of alternative content publishing system.

    I'll just take a paragraph out to stress that I actually admire WordPress a great deal. It's a very sophisticated and flexible web platform, and a great choice for site management, in either managed or self-hosted configuration. It kept this site ticking along for years. It just isn't a particularly good fit for my requirements, which are extremely simple.

    I thought about using another off-the-shelf blogging system, which would have been the sensible route, but I figured that would just lead to a similar frustrated stalemate. So I started to sketch out an application that would allow me to quickly fling out tagged and dated content without much overhead of hosting or writing. And I carried on intermittently piecing this app together, often on trains, for a couple of years. As an exercise in procrastination, it worked out better than I expected, and I carried on posting short content to twitter and others, reasonably happy to continue to defer the responsibility.

    But then the site went dark. I was hosting it all on a linode instance. I've been a very enthusiastic linode user for perhaps ten or more years; I think they have an excellent product, offering well-provisioned VPS instances, inexpensively, with an easy-to-use management site. Generally I've been very happy with them to date.

    This changed somewhat last year, and my confidence deflated a little. There was an extended outage of service across linode in December 2015, apparently as a result of a targeted DDOS. This lasted for many days, and the communications about it from linode were muted and suspiciously vague. This isn't really what I expect from a first-tier ISP. I came away with the impression that there were some significant architectural problems with their infrastructure, probably from accrued technical debt, and potentially some exploitable vulnerabilities in their public-facing application software. I decided it was time for a change.

    I did some research and rented a couple of new hosts. This time I've gone for low-end physical servers. This represented another procrastination opportunity, because when I originally set up the beatworm.co.uk linodes, almost ten years ago, I just hand-configured everything by remote shell. Now I like to use the ansible configuration management system to set up hosts, and I took this opportunity to port my public infrastructure across to repeatable playbooks. This turned into another major yak-shave, because there was slightly more to it than just a WordPress deployment; I was hosting mail, calendars, media streaming, IM, DNS, the works. After getting lost in this tarpit for a couple of months, I decided to move the application tier over to the playbooks from the sovereign project, which covers much of the same ground, but is already written, and uses more modern components. Of course it wasn't entirely straightforward to integrate these plays over my existing base provisioning, and I ran into a couple of glitches and gotchas with some of the choices they'd made for configuration, but it only took a couple of weekends' worth of fiddling to get it all running in fairly acceptable shape. I moved the DNS across, at which point the wordpress site was left behind, and everything went dark.

    I was surprised at how much this bothered me.

    I like an outlet for sharing things. I enjoy the idea of having a stable internet identity. I don't like the way the modern web has folded these ideas into a handful of consumer products run by just a couple of corporate gatekeepers. That's not the web I grew up with, and it's not the web I want to see either. A very loosely federated ecosystem of ad-hoc resources, all mixed together as hypermedia, aggregated and accessed via an assorted bag of user-agents: that's how it works best. I like to write, because I like the practice and discipline of working toward articulating my thoughts for a general reader.

    I like being able to curate an archive, and keep control over how that information persists and is presented. This is hard enough to do when you have primary jurisdiction over the medium and material (there is plenty of bitrot on view in my archive, particularly in the really old material, which has been migrated across multiple publishing platforms by now), and basically impossible if you're relying on a third-party service, which periodically re-invents itself to better serve its own objectives, which are only ever tangentially aligned with your own, at best.

    I don't like the sense of obligation I get from formal social media platforms. There's a subliminal sense of pressure to perform, to update, to observe the conventions, to consider and measure the implied audience. I'm not a joiner by nature. I just end up gently resenting the throng. I like to feel like I have a voice, but I don't want, or even expect to reach, an automatically provided audience.

    So, I picked back up my now-neglected website platform experiment, and knocked it together enough to get an MVP out of the door. It serves HTML over HTTP. It has a relatively minimal set of style rules that should allow it to work gracefully across various screen dimensions. It has rudimentary support for RSS (not that many people use newsreaders any more). It's simple to run in a staging environment, and I can write posts in plain text in emacs, and edit and post them without much extra grief. It's only got about 22% of the functionality I had originally planned, but I feel the urge to ship it, use it, and hopefully refine it in production.

    There's a couple of interesting quirks to this new hosting setup. It's an ARM-based micro-blade, hosted on a Scaleway C1. The blogging software is semi-static, in as much as it serves generated content from the filesystem. It's written in common lisp, and deployed on a different lisp to the one it's developed on. There are no frameworks (aside from using zurb foundation classes to base the CSS on). There's no database. There are no comments, because I haven't yet decided on a productive way to support them.

  9. I already mentioned in passing that St. Vincent, the band-shaped solo project brand thing of the super-engaging Annie Clark, was by far the best act I saw at Primavera Sound 2014. It was also the act I was most looking forward to seeing going in; it’s always nice when those line up.


    I guess I’m a super-fan. I first spotted Annie playing with Sufjan Stevens' touring band. I next encountered her playing solo support for the National, touring her first St. Vincent release, upon which occasion I bolted out of the auditorium by the third song, in order to make sure I got a copy of the CD she was plugging from the merch stall before she packed away. I saw another couple of shows in Bristol, with the full band, and bought all the records, including an interesting collaboration with David Byrne.


    Last weekend, while idly browsing the Glastonbury live blog, I noticed that they’d just updated their description of the current iPlayer feeds to include St. Vincent streaming from the park stage. I’d been avoiding the Glastonbury video feeds, due to a combination of not being in the mood and the dullness of the tv schedules, but I wasn’t going to miss out on this, so I whacked it on the TV. True to form, it was a great set: live, risky, and peppered with amusing crowd-surfing and hat theft. Even with a bit of sound trouble and some streaming glitches, I enjoyed myself, and was amused to see my enthusiastic tweeting duly included in the Guardian live feed on the next page refresh.


    “That was a really good set”, I thought to myself afterwards, “but it wasn’t nearly as exciting as the Barcelona one. True, that lacked crowd invasions, and nobody lost a hat, but the lighting, and the sound, and the staging, and the lack of daylight, and the crowd being really into it
 A pity there’s no TV-broadcast-quality stream of that night archived away somewhere”.


    Yes, I do really talk to myself like that sometimes. Especially when I’m pretending to transcribe my inner voice for a blog.


    And then, I ran into this on YouTube.


    Full set, multiple cameras, properly mixed sound, pretty good video quality. I have not yet watched it enough times to see if I can spot myself (front of house, stage left, VIP pen) in the crowd, but I expect I will.

  10. There’s been a little flurry of le CarrĂ© activity in the British press this week, following on from the release of MI5 archive files indicating that an MI5 agent, known as Jack King, ran a network of UK nazi collaborators during WWII. Highly fortunate timing for the British spooking establishment to garner some positive press, some might say. For the last couple of months the news reports about them have mostly been about illegal mass-surveillance techniques attempting to record and analyze all internet traffic at source, and creepy write-ups of mass automated collation of private video chats. Some of them intended to be particularly private, no doubt.


    Journalists had a bit of fun trying to retrospectively finger the real Jack King. The Telegraph decided King was probably John Bingham, Lord Clanmorris, whose name is usually mentioned in passing in press stories about ‘le Carré’, itself a pen-name for David Cornwell, who often mentions that Bingham is one of the component inspirations behind his super-famous fictional master spycatcher, George Smiley. The Telegraph also spun off an article about Bingham’s sense of disapproval of his protĂ©gĂ©'s literary exploits. Mr Cornwell, writing under his given name, sent in a marvellously succinct letter by way of reply.



    Bingham was of one generation, and I of another. Where Bingham believed that uncritical love of the Secret Services was synonymous with love of country, I came to believe that such love should be examined. And that, without such vigilance, our Secret Services could in certain circumstances become as much of a peril to our democracy as their supposed enemies. John Bingham may indeed have detested this notion. I equally detest the notion that our spies are uniformly immaculate, omniscient and beyond the vulgar criticism of those who not only pay for their existence, but on occasion are taken to war on the strength of concocted intelligence.



    Navigating around the flurry of reportage about this little back-and-forth, I found an engrossing older Q&A with le CarrĂ©, from the Paris Review, held at the time of the US publication of “The Tailor Of Panama”, back in the late 1990s. It is a marvellous read, concerning the mechanics, circumstances and techniques of his fictional writing, and it touches on politics. This quote leapt off the page at me.



    My definition of a decent society is one that first of all takes care of its losers, and protects its weak.



    Quite. He’s quite a writer, that Mr. le CarrĂ©. If all you know of his work are the mostly excellent TV and motion picture adaptations of his more famous works, you might do yourself a favour and read a few of the source novels. They work best tackled in publication order.

  11. Top Skaters


    This is what my final day at last.fm looked like.


    In the morning, this.



    Last.fm 720° team


    In the evening, this.

    Yes, I'm working on getting a MAME cab smuggled into Moonfruit.

  12. A-list iOS developer shop Tapbots today released a remix of their excellent twitter client (Tweetbot), focused on the tiny pay-subscription social network platform app.net. I think Tweetbot is probably my favourite thing about my iPhone, and so I immediately purchased it. No obvious disappointments: all the slick performance I like is there, and it brings across some features I've been lacking in ADN for a while, like the ability to swiftly upload photos. I promptly celebrated by taking photos of every last.fm staff member with an ADN account I could track down. I think this will probably increase my use of ADN moderately. Mobile is an essential component of gathering the off-the-cuff asynchronous status updates a service like this is built upon.


    I'm not sure that it will gigantically increase my engagement with ADN alpha. I was a bit suspicious of all the frothy cliques, with an intangible unease that I struggled to define, at least until I suddenly realised it was a cogent reminder of the very earliest days of bootstrapping the IMDb message boards. That left me feeling more comfortable with what the thing was, but no more inspired to engage. I'm still in love with the idea and the ideals of the place, and I'm reasonably confident it hasn't yet fallen into its proper, more useful place. I'm shallow enough to enjoy my sexy low user id on some level that even I don't properly understand.


    Has App Dot Net "arrived"? I think not yet. Netbot feels like a threshold event of some kind, in as much as serious developers are prepared to put enough effort into the ADN platform to produce fully realised software harnessed to it, and this degree of finish does not come cheap. ADN seems to be on a little draught of second wind recently: there's been a couple of fun toy apps, some positive press, and the recent price drop, bringing a wave of fresh users in. I'm still very positive about ADN as a concept, an indicator that there's now a long tail of internet folk interested enough in paying for stuff to make services like this potentially viable. I won't be really excited about ADN until I see the first compelling application built over it that is some mostly new and useful thing, rather than a new skin on an old one.

  13. It's not exactly the done thing on today's web, but I'm a huge believer in paying for web services. I've never been comfortable with the ad-supported web. When pure advertising is the only revenue stream supporting a product or service I worry about the deleterious effect upon that product or service.


    I don't like the implication that they're really working for their sponsor's interests ahead of mine. I don't like the mental effort of hunting down all the opt-outs, of second-guessing potential consequences of the creepy data-mining and covert information sharing with networks of 'trusted partners'. More straightforwardly, for many cases, I suspect the numbers don't really balance; I find it difficult to rely heavily on something with a potentially precarious revenue stream. I don't want to push too much content into, or build infrastructure around things that won't necessarily be around in a year or two.


    Paying directly for things makes everything seem more explicit and straightforward. I'm the customer. I can make informed decisions about the cost and usefulness of the thing. It's in the better interests of the service provider not to abuse the relationship. A product unspoilt and unhindered by commercial marriages should stand a better chance of evolving towards its essential form. So I'm a relatively easy sell as a consumer. Offer me a useful service, at a reasonable price, and I'm quite likely to pay you for it.


    The flipside of this is that I'm really cautious about the reverse. Purely ad-supported sites, especially ones that seem to be offering far too much for free without being noticeably saturated with advertising, make me feel slightly paranoid. I like to see which way the money flows.


    Here's a list of the sort of internety things I currently pay for, and will happily endorse. 



    ‱ Spotify: I'm a long-time tenner-a-month customer. I think it's too expensive, but I somehow never quite unsubscribe.

    ‱ Flickr: I have a pro account for photo hosting.

    ‱ DynDNS: I have a paid account, which gets me DNS zone hosting as well as a dynamic hostname.

    ‱ Pinboard.in: I like this bookmarking service. I was a very early adopter, and therefore my account cost a pittance, due to the unique way pinboard is funded.

    ‱ Lastpass: I like this service so much I subscribed, just to do my bit to ensure they stay in business.

    ‱ Linode: my internet hosts are linux virtual machines hosted with this service. Linode is excellent.

    ‱ Word Podcast: I subscribed to the (now sadly folded) Word Magazine, primarily to access their very enjoyable podcast.

    ‱ Metafilter: I don't use this site very much any more, but back in the old days I got so much surfing out of it that I eventually bought a paid account, just to contribute back.

    ‱ Reddit: Similarly, I bought a founder Reddit Gold account when they appealed for cash, because I really enjoyed Reddit back before the eternal September.

    ‱ iTunes: I use iTunes for quite a lot of things: apps, movie rentals and purchases, music purchases, and I have an iTunes Match subscription. If you have enough Apple gear to make an 'ecosystem', it's a good service.

    ‱ Amazon Prime: I love Amazon. Some days, I wish I still worked for them.

    ‱ Netflix: Most of my TV watching these days is netflix via Apple TV.

    ‱ App.net: I signed up for an app.net account the second I heard about it.


    It's not a huge list. I'd like it to be larger. There are whole categories of things I'd probably cheerfully pay for, should they exist. I'd pay a subscription for a decent search engine that wasn't a front for a creepy advertising juggernaut. I might pay for a subscription 'social' network, maybe something like a family-focused Yammer. I'd love something like a cheaper netflix that just focused on pre-1960s movies and archive TV. I'd like something like the old programming.reddit or hacker news. I'd love a smart news aggregator, and if I can't find one to pay for soon, I may have to invent one.

    In the olden times, there was a lot of talk about internet micropayments, and about how they couldn't possibly work, or how they were imminent and essential to safeguard the future of the web. They never really quite happened, and the shiny allure of the internet as a huge content pipe of free everything triumphed over all, but lately it feels to me like the mood is perhaps shifting a little.

    People seem to be wising up to some of the privacy considerations of infinitely free stuff that is only ever paid for covertly. The mobile app store culture has engendered a user community more acclimatised to fee-paying for services. Kindle is powering a minor revolution in self-publishing. Finally, there's Kickstarter, which is perhaps the most interesting current development in internet financing.

    There's nothing particularly new about the thinking behind Kickstarter. Through a combination of great execution and timing, it seems to have hit critical mass over the last 12 months. In the midst of all the long-tail nerd-bait (I recently signed on for my first funding) and snake oil, there are signs of some interesting funding efforts converging towards the mainstream. Champion self-publicist Amanda Palmer recently powered her project past the magical $1,000,000 mark, to flurries of 'old media' press interest.

    App.net is a manifest demonstration that I'm not completely alone in this line of thinking. Launched slightly before twitter's recent frantic, shark-jumping repositioning of its terms of service, it seemed a futile, quixotic gesture when I signed up to fund it on its kickstarter-esque signup page (apparently kickstarter's TOS precludes funding things like ongoing businesses, so they rolled their own thing). I fully expected it to fall short of its goal, but maybe pick up some positive news coverage as it flamed out, much like Diaspora did before. To my surprise it charged past the funding target ahead of the deadline, and closed way ahead of the target figure. Since then, they've launched the API, and built a sort of twitter clone across it at alpha.app.net, which is busy enough to be an almost useful, slightly cliquey chit-chat network of its own. It seems like app.net has the potential to self-host itself as at least a niche social network for privacy nerds and web developers. For some, that might be good enough, but I suspect the real power of app.net lies within its potential to become a kind of ad-hoc real-time message bus for higher layered services over its API. It remains to be seen if it can gather enough developer / user mindshare to deliver on that potential.

    The most high-profile campaign I've yet seen is the Penny Arcade Sells Out. High-profile, high-traffic funny-picture sites are the gold standard of high-volume ad serving, with content that massive audiences enjoy, but are used to reading for "free". Although they fell short of their more extravagant targets, including the 'complete ad removal', they hit their funding target, and raised half a million dollars. An A-lister website demonstrating the ability to generate competitive income with top-level ad sales entirely from direct user funding? Nearly. Is the tide turning? I don't know, but I can feel it pull.

  14. tee hee hee

    elfm.el is a rudimentary last.fm radio client implemented in emacs lisp. I wrote this at work, to present at our internal "Radio Hackday"; dedicated to encouraging staff to experiment with the radio services and API, and to make something with them in a day and a half for show-and-tell. Kind of 20% time distilled right down to an essence.


    I wasn't sure if I was going to have enough time to contribute anything, so I wanted to focus on something I could hack on by myself, because I didn't want to hold a team back if I got called away. So I picked something jokey, inessential, yet hopefully thought-provoking, as per my usual idiom.


    I had a real blast participating. I don't usually get time to attend things like proper hack days, being all old and family-bound. I really enjoyed the atmosphere of inspiration and industry. All the other hacks were amazing, and waiting for my turn to demo I felt quite embarrassed about my stupid cryptic toy, but it worked perfectly in the spotlight. I got almost all the laughs, and all of the bemusement I was aiming for.


    The code is here. It is awful. I haven't written any coherent lisp on this scale for many years. It uses too many global variables and special buffers. It doesn't scrobble. I had to rewrite my planned asynchronous network event machine halfway through implementation, when I re-discovered the lack of lexical closures in elisp. (I've been reading too many common lisp books in the interim, I suspect.) I think there's enough of the germ of a useful idea in there that I might just clean it up and try to extend it into a proper thing.


    I built and ran it using GNU Emacs 23.4.1. I used an external library for HTTP POST, which I found on emacswiki (HTTP GET I glued together using the built-in URL libraries). I've also put a copy of the version I used in the distribution directory. I used mpg123 for mp3 playback, which I installed using Mac Ports. The path to mpg123 is hardcoded in the lisp somewhere, probably inside play-playlist-mpg123.


    Here's my demo script, which I evaluated in a scratch buffer. Evaluating these forms in sequence will authorise the application, tune in the radio, and then fetch a playlist of five tracks and start playing them.


     ;;;; ----- DEMO; this example code is out of date, see README

     ;; will open a browser to authorise the application
     (authenticate-app)

     ;; authenticate a user session
     (start-user-session)

     ;; tune the radio to this URL
     (radio-tune "lastfm://user/colins/library/")

     ;; refresh the playlist
     (get-request (get-playlist-url))

     ;; filter the playlist response to sexps, play the list
     (play-playlist-mpg123 (reduce-playlist))

    There is only one playback control at the moment: stop, which you can manage by killing the buffer lastfm-radio, which has the playback process attached to it. You can retune the radio to any lastfm:// format URL by re-evaluating radio-tune, and then refreshing and playing the playlist, i.e. repeating the last three steps in sequence.


    The internal hackday was a cracking idea. Most of the hacks were focused around radio enhancements with broad-ranging appeal, and the vast majority of them looked practically useful. I suspect most of the work will filter out into site and product updates. In addition to this, and perhaps more valuably, it worked really well as a community exercise, encouraging knowledge-sharing, cross-team working, and enthusiasm, and converting them into inspiration, craft, and art. More of this sort of thing, everywhere!


    Updated



    I've iterated on the original hack quite a lot, to make it slightly less brain-damaged and a bit cleaner to import into anyone else's emacs. Updated code is here, and so is a README file with updated running instructions. It's still not really in a usable state for anyone else, but it's amusing me to fiddle with it, and I vaguely plan to get it to a releasable alpha state, at which point I will publish a repository.

  15. My friend Jim won 15 quid by solving the New Scientist Enigma Puzzle. The really neat thing is that he did it 32 years after the fact. Read all about it here, in his own words.


    Would anybody with a working BBC Micro like to contribute a real-world run time for his BBC BASIC solution?


    Jim runs the Enigmatic Code blog, about his hobby of solving New Scientist's Enigma puzzles using short Python programs, which anyone can play along with at home.

  16. I was churlishly unimpressed by the iTunes "12 days" Christmas promotion this year. However, whilst subsequently browsing the iTunes Store home page, I did find one app that impressed me enough to blog about.


    There's a store section called "Apps Starter Kit", which lists a dozen or so applications that Apple are promoting as "must have" installs for new iOS users. I installed a handful of these on my iPhone 3GS, but the one that has most impressed me so far is the iOS edition of DragonDictate.


    It's a "split brain" app, by which I mean it uses "the cloud" to perform the text-to-speech conversion. So far I have been quite impressed with the accuracy of the process, in fact I have created this blog post by dictating while walking the dog, with just a little editing afterwards for tidy up and to add hyperlinks. I suppose it is a little like a poor man's edition of Siri, minus the pretend A.I. and the search and reminders integration.


    You can get text by dictating into a text box within the application, and there is a quick menu of options that lets you create an SMS or an e-mail, or copy the text to the system clipboard for use in other applications. The workflow isn't too clunky, and although dictating text into your phone is a little stilted, it doesn't seem significantly less effective than my relatively crappy typing on the iPhone's on-screen keyboard.


    The app was free; presumably it's intended as a promotional device to introduce users to the Dragon family of software applications. Obviously there are some privacy concerns raised by having the voice processing performed on a remote server, but the terms and conditions include a privacy policy which guarantees to preserve your anonymity and keep your data private. The application even prompted me to ask whether I wanted all of my contact names uploaded to the remote service to improve name recognition, and took pains to explain that this would include only the name fields from my contacts database, and no other personally identifying information or contact details.


    I am not sure I would make a habit of using it for writing long articles, or even blog posts like this, but I think it could prove quite useful for short e-mail replies, or for sending SMS messages in situations where it's inconvenient to type.


  17. According to Wikipedia, the term "Churnalism" was coined by a BBC journalist. I think they may still have journalists working there.


    See how many items of product placement you can spot in this proud piece of presumably PR-led "pop sci" about smart vending machines. I found it, prominently linked, on the BBC news home page on Boxing Day. The entire notion has a whiff of that old-school classic of white-elephant puffery, the internet fridge, about it.


    I don't know if I'm alone in finding this sort of thing repellent. The motivation to whip up this kind of nearly content-free guff into page-length pieces must come from somewhere, which means a degree of specific intent. There's the skeleton of an interesting piece on machine learning and commercial interests buried in there somewhere, but I find it difficult to read when I keep being stabbed in the eyes by blatant marketing copy, much of which I uncharitably suspect of being pasted in directly from the source press release. The focus of the piece ought to be on the science, perhaps some of the biometrics and algorithms supporting the interesting-sounding audience impression metric (AIM) software, but that's given a throwaway mention; instead the article's centre of gravity seems distorted to orbit around some recently launched consumer products, with little depth of story. Weird details leave unanswered questions hanging. In what way is a new Jell-O SKU "just for adults" to the extent that it requires a screening interview by femputer? Titillating teaser questions like this are familiar marketing devices used to capture and exploit base curiosity, but they seem out of place in a news piece that offers no resolution. How does the system handle adults whose body shape diverges strongly from the four defined age brackets? What the merry heck is a general manager of personal solutions anyway?


    I gave up counting the product placement incidents after the first couple of paragraphs. Only someone with intimate knowledge of the BBC house style rules would know just how many direct repetitions of the properly capitalised brand names Kraft and Intel are strictly necessary, but there seem to be an awful lot of them littering the piece. There's a lovely Intel i7 box graphic three-quarters of the way down the page; it seems to me only tangentially related to the story, yet it conveniently re-uses the branding iconography supporting their current consumer-targeted CPU line.


    Like many a British licence-fee payer, I have a peculiar, combative, slightly proprietorial relationship with the BBC; being in some weird sense a stakeholder in this unique broadcasting organisation, pride mingles with a misplaced sense of ownership, and disappointment tangles with admiration. Once upon a time I viewed their web initiatives as exemplary, inspirational, and essential. These days they seem increasingly overcooked, irrelevant, and misguided.

    I realise that, in a sense, I'm a grumpy old man ranting at the telly, but I think this tapering off in the quality of BBC online content is a real thing. If so, it's a really worrying trend; added to which, we have an effectively Conservative administration, who I'm sure would love to see the BBC, already in retreat, broken up further. Spreading the more lucrative parts of the special quasi-monopoly out to their chums in commercial broadcasting, whilst binning even more of the less lucrative parts in the name of austerity, would fit in well with their principles of government.