1. I've been using a set of Superman covers I scraped from Superdickery.com as a screensaver on my Mac for a couple of years. I just dropped them all in a folder, and pointed the built-in "slideshow" saver at it. Set to "Shifting Tiles" with 'shuffle slide order', it makes a nice regular grid of comic books that zip in and out regularly.


    Last week I had a notion. I dusted off my old Canon LiDE A4 USB scanner, fired up VueScan and set about scanning a couple of boxes of my own comic book collection. It was a surprisingly therapeutic couple of hours of mechanical work to scan a few hundred, and the result is a more pleasingly personalised slideshow, with a larger number of member images.


    After running with it for a couple of days, I'm really pleased with the results. It could do with a little more variety, because I scanned from boxes where the material was alphabetically organised by title (what am I, some kind of nerd?). Some other observations: the 90s were really dark, both in the stupid post-Watchmen 'gritty heroism' sense, but also more literally in the colour palettes. This is really obvious contrasted against the four-colour poster silliness of the classic Super titles I've switched from. Ironic that high-grade reproduction technology and digital colouring options, as well as the shift to fully painted illustrations, seem to have led to a more muted spectrum of offerings. Perhaps this says a little about my youthful tastes. Also, what was I thinking, sticking with that second run of 'Mage' ("The Hero Defined")? That book was pretty terrible as I recall, and I've certainly got no urge to reread and check my assumptions. I'm leaving them in the set, because it seems dishonest not to.


    I've got another dozen or so boxes to scan. I should do some sums to work out what the storage implications of that are before I commit to bunging the rest of them on my 256GB SSD though.
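    The sums aren't hard, as it happens. Here's a back-of-envelope sketch in shell arithmetic - the per-scan size and comics-per-box figures are pure guesses on my part, so substitute your own:

```shell
# All figures here are assumptions: ~4 MB per JPEG scan, ~200 comics per box, 12 boxes
avg_mb=4
per_box=200
boxes=12
total_mb=$(( avg_mb * per_box * boxes ))
echo "${total_mb} MB (~$(( total_mb / 1024 )) GB) against a 256 GB SSD"
```

    At guesses like those, another dozen boxes comes out at single-digit gigabytes - noticeable, but hardly SSD-threatening.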


    As a side thought, I realised that Everpix had diligently uploaded all my scan jpgs, so I can present a public gallery of the work so far for your bemusement.

    posted by cms on
    tagged as
  2. If you have a Mac, and you use Terminal.app to run UNIX commands, try executing this for a cool shell prompt


     export PS1="\360\237\220\232 $ "

    See what I did there?


    If you are using a UTF-8 encoding for your terminal, which you probably are, and if you're using a recent OS X, and have the right fonts installed, which you probably do, you should have a little sea-shell graphic for your prompt. Literally a cool shell prompt.


    [Screenshot: Terminal with a sea-shell emoji prompt]


    In a recent revision to Unicode , code points were assigned for many emoji. Emoji-what-now? These are little emoticon glyphs that rose to popularity in Japan . Apple have included a nice typeface with full colour icons for a subset of these in the last couple of releases of both iOS and OS X, so you can use them in most applications that use the system type rendering library, like Messages. On OS X, this includes the bundled Terminal.app terminal emulator. So you can print little icons in your shell, if you know an encoding for a particular glyph.


    Here's the ever-popular 'pile of poo' ( U+1F4A9 ).


    [Screenshot: Terminal printing the pile-of-poo emoji]


     


    Not sure what that is supposed to be used for, but it's terribly popular on the internet. "But how", I hear you ask, "do you find out the encoding sequences for these appealing novelties?"


    Well, you can search for Unicode code tables on the internet. On the Mac though, the easiest thing to do is probably to enable the Character Viewer tool via the Language & Text preference pane.


    [Screenshot: enabling the Character Viewer in System Preferences]


    This gets you a panel like this, where you can browse all the characters your computer knows how to render, including all the emoji sets, and find out their Unicode code points, and more importantly, a way to encode that code point in UTF-8.


    [Screenshot: the Character Viewer panel showing a character's Unicode details]


    So, as you can see in my fecal example, the UTF-8 byte sequence for 'pile of poo' ( U+1F4A9 ) is F0 9F 92 A9, and we can print that in a bash shell using echo with the -e flag to enable interpretation of escape sequences, and the \x escape prefix to indicate bytes in hex.
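    Spelled out as a command, that looks like this (the byte values are the ones just quoted; printf is a more portable alternative if your shell's echo doesn't do -e):

```shell
# Print U+1F4A9 by emitting its four UTF-8 bytes as \x hex escapes
echo -e "\xF0\x9F\x92\xA9"
# printf understands the same escapes without needing a flag
printf '\xF0\x9F\x92\xA9\n'
```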


    Going back to the original shell trick, the shell emoji ( U+1F41A ) has the UTF-8 encoding F0 9F 90 9A. The bash shell doesn't seem to have an escape sequence for hex-encoded bytes in its prompt string, but it does interpret 3-digit codes prefixed with a plain \ as octal-encoded literal bytes, so if we convert this hex string to four octal numbers, using bc or od, or emacs, or just Calculator.app, we get the escape sequence from my initial shell example - "\360\237\220\232"
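    As it happens, bash's own printf will do the hex-to-octal conversion too, if bc and Calculator.app feel like too much ceremony - a quick sketch:

```shell
# Convert the four UTF-8 bytes of U+1F41A from hex to the octal escapes PS1 wants
printf '\\%o\\%o\\%o\\%o\n' 0xF0 0x9F 0x90 0x9A
# prints \360\237\220\232
```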


    So far so cute. But is there anything vaguely useful you can do with this sort of thing? Sort of. A picture's worth a thousand words. So we could perhaps encode mnemonic information in icons, and somehow dynamically update the prompt to reflect this.


    Bash will execute the contents of an environment variable PROMPT_COMMAND as a shell command immediately before the shell prompt is printed. Typically this is used to update terminal colours or title strings with escape sequences, or update PS1 to add some content that can't be printed using the built-in prompt escape functions. I decided to make my prompt respond to the result of my most recent command.


    Here's the relevant shell glue I just stuck in my .bashrc 


     
     emoji ()
     {
         # happy face (U+1F603) for a zero exit status, confused face (U+1F615) otherwise
         if [ "${1:-0}" -eq 0 ]; then
             echo -e "\xF0\x9F\x98\x83 $ "
         else
             echo -e "\xF0\x9F\x98\x95 $ "
         fi
     }

     export PROMPT_COMMAND='PS1=$(emoji $?)'

    This runs a shell function called emoji in a subshell, which returns a string based on the input argument. The input argument I'm using is the exit status of the last shell command. This gets me a smiley face in my shell prompt, unless the last command I ran returned a non-zero exit status, which in UNIX indicates a problem happened. This makes my prompt draw as a 'confused smiley' if something has gone wrong.


    [Screenshot: the prompt showing a confused smiley after a failed command]


    Still cute, and almost useful!


    I think I'll keep it for a while.


     

  3. I'm experimenting with desktop email clients again.


    I like Apple Mail a lot; it's one of my favourite examples of a GUI desktop application, but the last couple of iterations have made it a little more clumsy to use with keyboard navigation, and it doesn't scale terribly well to managing multiple, high-volume IMAP accounts. In particular, I find refiling groups of similar emails to be more labour-intensive than the task would seem to require. By way of contrast, I love refiling mail on my iPhone using Apple Mail for iOS; in truth I love using Mail on my iPhone for any mail task way more than I'd expect. It's insanely usable for an email client on a tiny, squeezable hand-toy.


    The real impetus for investigating a desktop alternative has come from our recent switch to using GMail for our corporate mail service at work. I hate google mail's not-quite-IMAP IMAP implementation, I hate its sluggish IMAP performance through Mail.app, and I hate hate hate its god-awful webmail interface. So I've been putting some thought into rethinking the way I process email. Naturally my first line of attack is to retreat to emacs.


    I've used emacs for mail before, on and off. When I first switched to using linux for my desktop systems, way back in the 90s, I used gnus on emacs for mail for a while. Then, when I made the switch to XEmacs for a couple of years, I discovered VM, which was my main INBOX on and off, following me back to GNU Emacs, with occasional experiments with Netscape Navigator and Evolution, up until I switched to a Mac full-time, around 2001. I do recall trying Thunderbird a couple of times, but I could never tolerate it for much longer than a half-hour. I also used Wanderlust for emacs for a few months when I first started working at last.fm, but I switched to using a Mac at work shortly after that, and added my work email to my Apple Mail setup.


    This time around I'm trying to re-organise the way I approach mail fundamentally. A few years ago, I started deleting mail after I'd read it, unless I definitely felt it warranted keeping. I really liked the feeling of freedom that seemed to open up, releasing me from worrying about tidy filing of hierarchical mail archives that always needed archiving and backing up. Inspired by GMail's approach to tagging and searching, the mail I did keep I filed into a small set of IMAP buckets and indexed them in Apple Mail with labels and "smart folder" searches. So I'm trying to push that even further, and I'm trialling mu , a decidedly minimalist interface to email.


    mu works over a local mail store, ideally a Maildir. So I've started syncing my work GMail account to my laptop, using the mature, Free software syncing tool offlineimap (I installed it from MacPorts). offlineimap has specific GMail support, and it's super-easy to set this up to sync to a GMail account, although I had to add a


     folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail']

    to the account configuration in ~/.offlineimaprc to stop it syncing the GMail "All Mail" label as an IMAP folder, which meant I had two copies of every email coming down. I set up a user launch agent via launchd to run offlineimap every 5 minutes, syncing to ~/Library/OfflineIMAP/lastfm/.
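    For reference, a minimal sketch of the sort of launch agent plist involved - the label here is made up, and the MacPorts binary path and offlineimap flags are my assumptions, not copied from my real setup. Something like this would live in ~/Library/LaunchAgents/ and get picked up with launchctl load:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- label and binary path below are illustrative assumptions -->
  <key>Label</key>
  <string>org.example.offlineimap</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/local/bin/offlineimap</string>
    <string>-o</string>
    <string>-u</string>
    <string>quiet</string>
  </array>
  <!-- run every 300 seconds, i.e. the 5-minute sync mentioned above -->
  <key>StartInterval</key>
  <integer>300</integer>
</dict>
</plist>
```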


    Once the mail was syncing both ways, I ran 


     MAILDIR=~/Library/OfflineIMAP/lastfm/ mu index 

    to initialise the mu indexes. I can now explore the mail archive from the shell using commands like 


     mu find from:jira date:2w..today

    which would return a summary list of emails matching the search criteria (i.e. all mail sent from JIRA in the last 2 weeks). mu is based on the xapian indexer library, and these queries run lightning-quick. The indexing process is thus entirely separate from the imap sync, and the indexes need to be updated by re-executing the 'mu index' command to keep them fresh. This takes fractions of a second after the original indexes are built.


    I'm not really interested in running searches from the shell though. mu is really an archive browser; ideal for integrating with other mail reading and sending utilities. mu ships with a nice keyboard-friendly emacs interface called mu4e. mu4e offers quick navigation shortcuts to browse IMAP folders, a simple syntax for mu searches, and a list of bookmarked searches, much like virtual folders. mu4e can be set to periodically update the mu index, and even run a Maildir sync tool, such as offlineimap. Here's the config elisp block from my startup files.


     (setq-default
    mu4e-maildir "~/Library/OfflineIMAP/lastfm"
    mu4e-drafts-folder "/Drafts"
    mu4e-trash-folder "/Deleted Messages"
    mu4e-sent-folder "/Sent Messages"
    mu4e-refile-folder "/Archive"
    mu4e-mu-binary "/usr/local/bin/mu"
    mu4e-sent-messages-behavior 'delete
    mu4e-get-mail-command "true"
    mu4e-update-interval 300)

     all of which is quite straightforward. The root of the various folder paths is the top-level Maildir. mu4e-sent-messages-behavior is set to the symbol delete, which is recommended for GMail accounts, as GMail auto-populates one of its magical pretend folders with all sent messages. I have set mu4e-get-mail-command to "true" because I prefer to have the Maildir synced via my launch agent, independently of emacs.


    There's a very nice mu4e manual which documents the package in detail; I haven't managed to work through it all yet. So far I'm managing quite well with manual searches, and the default set of keybindings and stored bookmarks. List view management follows the usual emacs semantics of building up 'marks' on list entries and then applying the actions in bulk, familiar to habituated emacs users from org-mode, wanderlust, dired etc.


    Mail editing and sending are borrowed from the usual emacs GNUS / smtpmail combination, which is fine, as these work perfectly well.


    I've found only one tricksy wrinkle; mu4e, like any sensible thing, expects email to be in plain text. If the viewer is summoned on a rich text (usually HTML) mail, it tries to convert it to plain text for viewing. By default it is set up to use emacs' built-in html2text method, which frankly sucks, and failed to convert the majority of HTML mail in my INBOX. mu4e has a configuration variable, mu4e-html2text-command, to use an external conversion command instead. This should be a utility that accepts HTML input on stdin, and returns converted text on stdout. The manual suggests using the python-html2text utilities, but I think on a Mac it makes more sense to use the mildly obscure, but occasionally useful, Apple-provided shell tool textutil.


    It needs to be invoked like this to work with mu4e. 


      (setq mu4e-html2text-command
    "textutil -stdin -format html -convert txt -stdout")

    And with that, everything works great. I'm going to try living with it for a few weeks before I customise it further, but I'm looking forward to setting up Wanderlust-style dynamic refiles, and integrating crypto support, so I can return to GPG encrypting and signing my mail again, like I ought to, at my age. Never forgetting, of course, cms' 1st law of software: "All mail clients suck, intrinsically"

  4. If you find that your Mac's 'Open With' menu is growing cluttered with identical menu entries for the same application, this indicates that your Launch Services database is confused. 


    In the normal course of events your computer scans for entries to merge into this database at boot time, and then at login for the user domains. The Finder updates it with new application information as and when new App or Framework bundles are encountered during its normal operation. Unfortunately this database does seem to be capable of becoming persistently corrupted, which will result in symptoms like a duplicate-riddled 'Open With' menu, or incorrect or inconsistent filetype/application associations.


    On Mountain Lion, you can interact with the system database from the shell, using the lsregister utility. Run it without arguments to get basic usage instructions. It is not on any default paths; it's buried away inside /System/Library/Frameworks/CoreServices.framework.


     /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -dump

    will show you the current database in human readable form. To scrap and rebuild the database completely you might do something like this 


     /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister -kill -all u,s,l -r -v

    The u,s,l argument there specifies that we should recursively ( -r ) scan for bundle directories in the user, system and local domains (i.e. ~/, /System/, and /) and register their document type bindings and other information with the Launch Services agents, which will update their database with this information. The -v switch turns on progress logging, which is all done to stderr.


    If you're in the habit of installing apps or library bundles to alternative roots beyond the built-in domain types, you can add those paths to the command, instead of the domain flags.

  5. A-list iOS developer shop Tapbots today released a remix of their excellent twitter client ( Tweetbot ), focused on the tiny pay-subscription social network platform app.net. I think Tweetbot is probably my favourite thing about my iPhone, and so I immediately purchased it. No obvious disappointments; all the slick performance I like is there, and it brings across some features I've been lacking on ADN for a while, like the ability to swiftly upload photos. I promptly celebrated by taking photos of every last.fm staff member with an ADN account I could track down. I think this will probably increase my use of ADN moderately. Mobile is an essential component of gathering the off-the-cuff asynchronous status updates a service like this is built upon.


    I'm not sure that it will gigantically increase my engagement with ADN alpha. I was a bit suspicious of all the frothy cliques, with an intangible unease that I struggled to define, at least until I suddenly realised it was a cogent reminder of the very earliest days of bootstrapping the IMDb message boards. That left me feeling more comfortable with what the thing was, but no more inspired to engage. I'm still in love with the idea and the ideals of the place, and I'm reasonably confident it hasn't yet fallen into its proper, more useful place. I'm shallow enough to enjoy my sexy low user id on some level that even I don't properly understand.


    Has App Dot Net "arrived"? I think not yet. Netbot feels like a threshold event of some kind, in as much as serious developers are prepared to put enough effort into the ADN platform to produce fully realised software harnessed to it, and this degree of finish does not come cheap. ADN seems to be on a little draught of second wind recently; there's been a couple of fun toy apps, some positive press, and the recent price drop, bringing a wave of fresh users in. I'm still very positive about ADN as a concept, an indicator that there's now a long tail of internet folk interested enough in paying for stuff to make services like this potentially viable. I won't be really excited about ADN until I see the first compelling application built over it that is some mostly new and useful thing, rather than a new skin on an old one.

  6. If you've ever tried to take over somebody else's detached screen sessions, by using the su command to assume their login identity, you've probably seen an error message something like


     Cannot open your terminal device /dev/pts/3

    This is because your pseudo terminal device is allocated when you log in to the session, and remains owned by the user id you logged in with, even after you've changed your effective uid by su-ing.


    You can try to kludge your way around it by chmod-ing your pty device file to make it more arbitrarily readable, but that's ugly and stupid, and needs escalated privileges. A slightly smarter way to work around this is to force a new pseudo terminal for the assumed login session. A really simple way to do this, which I've recently discovered, is to use the script utility. script is a useful tool intended to preserve a transcription of an interactive terminal session. To do this, it creates a new pty device for the current user id. So you can use it to help you recover a detached screen by typing this


     su - someuser

     script /dev/null

     screen -r somesession

    Passing /dev/null to script just means that the transcript is discarded.
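    If you want to see the fresh pty with your own eyes, compare what tty reports outside and inside a script session. This sketch uses util-linux syntax (script -c); on a Mac or BSD the command goes at the end instead, as in script -q /dev/null tty:

```shell
# The shell's current controlling terminal (if any)
tty
# The same question asked from inside script(1), which allocates a new
# pseudo terminal owned by the current uid (util-linux syntax assumed)
script -q -c tty /dev/null
```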


     

  7. One thing I wasn't expecting, from last month's new Apple hardware announcements, was the new MagSafe 2 power connector . The new Retina MacBook Pro, along with the 2012 " Ivy Bridge " MacBook Airs, have a new MagSafe port, physically incompatible with the previous generation, unless you use a little adaptor widget , which was luckily introduced for sale on the very same day.  


    MagSafe is Apple's name for their clever system of attaching the power line to their laptops to charge. Some say too clever by half. The connector's pins are arranged symmetrically, so you don't need to worry about orientation when you connect it up. The pins live in a little oblong recess, surrounded by a thicker shiny metal lip, which is magnetized. The power socket has the complementary inverse shape and magnet, meaning that they eagerly cup together to form a snug charging connection when introduced. The other significant benefit of this arrangement is the ease of disconnection, nice in itself, with the additional blessing that if some clumsy person, perhaps a passing dalmatian, blunders through your cable while you're tethered to the mains, your computer doesn't fly from the desk and shatter; the magnet just snaps free. I'm a big fan.


    And so, on to MagSafe 2. Essentially it's the same thing, but in a different shape. The pin configuration and spacing seem to be the same, but the magnetic lozenge, and the companion socket, have been reshaped to be longer in the lateral plane, and slightly shorter in height. The shape of the connecting plug has returned to a symmetrical rectangular nub, with embedded charge indicator. Reminiscent of the first generations of MagSafe, but aluminium, rather than white plastic, and slightly longer, making it perhaps a bit more finger-friendly.


    Most commentary I've seen about this form change has settled on the Retina MacBook Pro as the motivation for this change, speculating that the move to thinner unibody laptops requires a thinner connector. I'm pretty unconvinced by that argument. The MagSafe 2 is only a millimetre or so thinner than the previous design. I think that if your design constraint was to shrink the connector, you could make it smaller. Furthermore, the traditional Magsafe port is almost the same height as a USB or HDMI socket, and the Retina laptop case houses these ports, without compromise. I have a different theory about the reasoning behind this new shape.


    I think the most significant change is that the contact area of the magnetic surface has now nearly doubled. It's a lot more grippy than its ancestor. Anecdotally, over the lifespan of the MagSafe, I've heard complaints from other users about the reliability of the chargers, particularly about cable and connector failure. Having never experienced similar problems with the half-dozen plus MagSafe chargers I've owned, I've puzzled about this. I wonder how many people might be disconnecting their chargers by yanking on the cable. This works as a method of disconnection, but it's not a very sensible approach: it puts a lot of mechanical stress on the junction between the cable and the plug. Do it enough, and you'll eventually break it. The magnetic coupling is most efficient in the horizontal plane. What you ought to do is flick the plug out, by hooking a finger underneath the connector plug, and angling it up away from the socket.


    Apple certainly seemed to recognise that there was a UI problem here. Perhaps an expensive one, if enough customers were returning broken chargers to stores. They even produced a technote about the correct way to disconnect a MagSafe. MagSafe plug connectors also changed shape over time. The strain relief on the cable junction lengthened, and then the plug changed from the original stubby T-shape to a longer L-shape, itself subsequently reinforced with additional strain relief. This connector shape encourages a lower-stress detachment, but spoils the nice symmetrical property of the plug, because you can now connect it facing forwards, where it will obscure your other ports. MagSafe 2 restores this helpful feature.


    So is reliability a plausible motive for this redesign? I think so. The increased contact area of the magnet in MagSafe 2 makes it quite a bit harder to disconnect by cable-tugging. The larger plug housing is easier to grip with the fingers and angle out. The connector is a sufficiently different shape to visibly distinguish it from its predecessor. It will be interesting to see if the reliability reports from users improve.

  8. It seems like I've been waiting all my computing life for VDUs to exceed 200 DPI. Well, that's an exaggeration. I've been waiting for it since I was first exposed to system-wide vector-based type rendering, in the late 1980s. So I'm understandably excited about Apple's new "retina" MacBook Pro, with its display of ~220 DPI.


    Why care so much about DPI? It's all about the text, in particular the inherent problems with clearly scaling non-rectilinear strokes.  Text is the fundamental component of everything I do with computers. It always has been, and it seems likely that it will long continue to be. As a floppy haired, slack-wristed aesthete, I really care that the text, which I will be staring into for hours, is clear and beautiful.


     The LCD screens used for most modern displays are constructed from a mesh of tiny discrete transparent shutters , which work in combination to make up pixels, which are the smallest visual element that can be addressed on a bitmap display. Typically these pixels are nearly square, and they are arranged in a 2D matrix of perhaps a few million elements. That may sound like a lot, but it's coarse enough to introduce perceptible distortion into lines that are not perfectly rectilinear.


    One of my favourite things about Mac OS X, and its upstart little brother, iOS, is the respect their type-generating software applies to letterform. Typefaces render very faithfully, regardless of scale, and pains are taken to smooth out the curves, using anti-aliasing techniques that detect the staircasing edges of lines and soften them into their background with gradual shading. This works very well, but it's not unnoticeable; there's a soft-focus effect that gives a fringey halo to certain text shapes; you become inured to it over time. Other GUI systems tend to adjust the letterform to make the text better align to the pixel grid, and it's common for people who aren't habituated to the Mac to comment about the degree of blur.


    Things are much better than they used to be. Way back in the day, when outline curve rendering was just too computationally expensive to be routine, everything on-screen was painted as a copy of a pre-drawn bitmap, and blocky graphics were everywhere, particularly once scaling and translation were applied. We peered at them on our tiny goldfish-bowl CRT monitors. Outline font rendering was a specialist feature of certain software packages or dedicated computer systems, perhaps not even rendered online. The fanciest workstation computers had gigantic 20" CRTs, and fully vector-based graphics engines like Display PostScript. It seemed reasonable then to expect the exponential improvements in technology to scale this up to at least print-quality DPI, and the costs to come down.


    The costs did come down, and the computers continued their frantic pace of improvement, but something appeared to lock mainstream display rendering at somewhere around 100 DPI for over a decade.  I think it was a combination of factors.


    There was the move away from bulky beam-scanning phosphor-dot CRT monitors, which are theoretically capable of precise drawing at a perfectly graduated range of resolutions, over to the more space- and power-efficient LCD displays, with the aforementioned discrete physical pixel elements. Fifteen years ago I had a 19" ADI multisync CRT monitor, and the effective resolution of my computer display crept up as I upgraded my graphics card and display, and the monitor kept pace. For the last ten years, I've been using a nice 23" HP widescreen LCD, and my desktop resolution has been locked at the 1920x1200 that corresponds to the mechanical pixel array of my screen.


    LCD screen technology manufacturing is closely tied to flatscreen television production, where the standard vertical resolution has settled on 1080 pixels, which is marketed as ' High Definition ' which is actually pretty low definition if you stop to think that cheap desktop computers were routinely rendering higher than that years before its roll-out.


    The system software used on desktop computers made optimisations and took short-cuts based on the average dot pitch, using fixed bitmaps for painting GUI elements, making assumptions about proportions and spacing of on-screen elements that became entrenched and subsequently proved remarkably hard to shift.


    The turning point seems to have come with the iPhone 4, and its "Retina" display, with a DPI count of 326 - close to that of low-grade print - on its highly saturated backlit LCD screen. Text looks fantastic on this generation of iPhone, still to me the nicest display of this type I've seen. This was followed up by the slightly coarser (264 DPI) Retina iPad model a couple of years later, and, as of last week, the still slightly astonishing Retina MacBook Pro. Seems like the high-DPI era I've been waiting for is here!


    And yet I'm not going to buy a Retina MacBook Pro. I did give it some excited thought. I rushed right out to Apple Covent Garden after the announcement, and fondled one for a little bit, and decided it's not really for me. Experience has taught me to steer wide of a first-iteration Mac platform, especially one where Apple seems to be pushing the hardware design into some advanced new shape. There's often early-adopter trouble. A couple of early warning signals jump out at me from the start. Pushing that many pixels around is really going to need some grunt. I have my suspicions about cooling; why the big air vents down the side, why devote five minutes of the keynote to describing a cunning new fan design? It's a Mac, I want no fans. Steve always wanted No Fans. It's too big and heavy for me, and yes of course, it's really expensive.


    I ordered a new generation 13" MacBook Air . It will replace my current laptop, a last generation 13" MacBook Air. Which replaced my previous laptop, a 13" MacBook Air from the year before. Seems I have a MacBook Air habit .


    The wedge-shaped MacBook Air is iterating rapidly to converge upon my ideal computer. Light enough to move around without becoming a burden. A full-scale keyboard that I enjoy typing upon, as an emacs-wedded touch typist prone to RSI. Enough pixels on the screen to productively juggle the magical three-window pattern I tend to adopt for work (an editing window, a reference window, and a command shell). Enough power that I don't need to worry about where my next charge point is. And the 13" display has fairly small pixels (~128 DPI). Smaller text isn't as legible as I'd like, mind you, and some of the GUI elements are a bit small. It would be nice to have more CPU cores. Like I say, iterating rapidly...


    200+ DPI displays are clearly here to stay. Where Apple plant their flag, all the OEM PC hardware makers inevitably follow. Microsoft Windows, which increasingly looks like it's playing catch-up, seems to me, looking from the outside, to be more completely resolution-independent than either of Apple's operating systems at this point in time, so that shouldn't be a hold-up to broader deployment any more. Production will simplify. Costs will fall with scale.


    I had been planning on buying a nice external display, probably an Apple Thunderbolt Display, because they make lovely docking stations for Thunderbolt-equipped laptops, but that's a foolish idea now. It seems sensible to bet that there will be a high-DPI equivalent along within a couple of years, and monitors are a long-term investment. I can wait.


    We seem to be at something of a transitional phase for the personal computer at the moment. It seems likely that the future of the Mac is some kind of convergence point between the iPad, the Retina MacBook Pro and the MacBook Air, but I can't quite figure out what shape that thing will take. I am typing this final sentence on my box-fresh, just-powered-up 2012 MacBook Air, with its new Mac smell, and its LCD screen cleaner than I will ever be able to polish it; already I am day-dreaming about its replacement.

  9. tee hee hee

    elfm.el is a rudimentary last.fm radio client implemented in emacs lisp. I wrote this at work to present at our internal "Radio Hackday", dedicated to encouraging staff to experiment with the radio services and API, and make something with them in a day and a half for show-and-tell. Kind of 20% time distilled right down to an essence.


    I wasn't sure if I was going to have enough time to contribute anything, so I wanted to focus on something I could hack on by myself, because I didn't want to hold a team back if I got called away. So I picked something jokey, inessential, yet hopefully thought-provoking, as per my usual idiom.


    I had a real blast participating. I don't usually get time to attend things like proper hack days, being all old and family-bound. I really enjoyed the atmosphere of inspiration and industry. All the other hacks were amazing, and waiting for my turn to demo I felt quite embarrassed about my stupid cryptic toy, but it worked perfectly in the spotlight. I got almost all the laughs, and all of the bemusement I was aiming for.


    The code is here. It is awful. I haven't written any coherent lisp on this scale for many years. It uses too many global variables and special buffers. It doesn't scrobble. I had to rewrite all my planned asynchronous network event machine halfway through implementation, when I re-discovered the lack of lexical closures in elisp (I've been reading too many Common Lisp books in the interim, I suspect). I think there's enough of the germ of a useful idea in there that I might just clean it up and try to extend it into a proper thing.
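    To illustrate the closure pitfall (this example is mine, not code from the hack): under the dynamic binding that was the only option in Emacs 23, a lambda doesn't capture the local variables of its defining function, so callback-style code quietly breaks.

    ```elisp
    ;; -*- lexical-binding: t; -*-
    ;; Illustrative sketch, not from elfm.el.

    (defun make-adder (n)
      "Return a function that adds N to its argument."
      (lambda (x) (+ x n)))

    ;; With the lexical-binding cookie above (available from Emacs 24
    ;; onwards), the lambda is a true closure and this returns 5.
    ;; Under Emacs 23's dynamic binding there is no such cookie; `n'
    ;; is looked up at call time instead, and the call signals a
    ;; void-variable error.
    (funcall (make-adder 2) 3)
    ```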


    I built and ran it using GNU Emacs 23.4.1. I used an external library for HTTP POST, which I found on emacswiki (HTTP GET I glued together using the built-in URL libraries). I've also put a copy of the version I used in the distribution directory. I used mpg123 for mp3 playback, which I installed using MacPorts. The path to mpg123 is hardcoded in the lisp somewhere, probably inside play-playlist-mpg123.
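    If I were cleaning that up, an obvious first step would be lifting the hardcoded player path into a variable. A sketch, where the variable and function names are my own assumptions rather than elfm.el's actual API:

    ```elisp
    (defvar elfm-mpg123-program "/opt/local/bin/mpg123"
      "Path to the mpg123 binary (a typical MacPorts install location).")

    (defun elfm-play-stream (url)
      "Play the mp3 stream at URL with mpg123 in the lastfm-radio buffer."
      (start-process "lastfm-radio" "lastfm-radio"
                     elfm-mpg123-program url))
    ```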


    Here's my demo script, which I evaluated in a scratch buffer. Evaluating these forms in sequence will authorise the application, tune in the radio, and then fetch a playlist of five tracks and start playing them.


     ;;;; ----- DEMO, this example code is out of date, see README

     ;; will open a browser to authorise the application
     (authenticate-app)

     ;; authenticate a user session
     (start-user-session)

     ;; tune the radio to this URL
     (radio-tune "lastfm://user/colins/library/")

     ;; refresh the playlist
     (get-request (get-playlist-url))

     ;; filter the playlist response to sexps, play the list
     (play-playlist-mpg123 (reduce-playlist))

    There is only one playback control at the moment: stop, which you can manage by killing the buffer lastfm-radio, which has the playback process attached to it. You can retune the radio to any lastfm:// format URL by re-evaluating radio-tune, and then refreshing and playing the playlist, i.e. repeating the last three steps in sequence.
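    That retune-refresh-play sequence is mechanical enough to bundle into a single command. A sketch (my own wrapper around the functions from the demo script, not something elfm.el provides):

    ```elisp
    (defun elfm-retune-and-play (station-url)
      "Retune the radio to STATION-URL, then refresh and play the playlist."
      (radio-tune station-url)
      (get-request (get-playlist-url))
      (play-playlist-mpg123 (reduce-playlist)))

    ;; e.g. (elfm-retune-and-play "lastfm://user/colins/library/")
    ```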


     The internal hackday was a cracking idea. Most of the hacks were focused on radio enhancements with broad-ranging appeal, and the vast majority of them looked practically useful. I suspect most of the work will filter out into site and product updates. In addition to this, and perhaps more valuably, it worked really well as a community exercise, drawing out knowledge-sharing, cross-team working, and enthusiasm, and converting them into inspiration, craft, and art. More of this sort of thing, everywhere!


    Updated



    I've iterated on the original hack quite a lot to make it slightly less brain-damaged, and a bit cleaner to import into anyone else's emacs. Updated code is here and so is a README file with updated running instructions. It's still not really in a usable state for anyone else, but it's amusing me to fiddle with it, and I vaguely plan to get it to a releasable alpha state, at which point I will publish a repository.

    posted by cms on
    tagged as
  10. My friend Jim won 15 quid by solving the New Scientist Enigma Puzzle. The really neat thing is he did it 32 years after the fact. Read all about it here, in his own words.


    Would anybody with a working BBC Micro like to contribute a real-world run time for his BBC BASIC solution?


    Jim runs the Enigmatic Code blog about his hobby of solving New Scientist's Enigma puzzles using short Python programs, which anyone can play along with at home.

    posted by cms on
    tagged as
  11. I was churlishly unimpressed by the iTunes "12 days" Christmas promotion this year. However, whilst subsequently browsing the iTunes Store home page, I did find one app that impressed me enough to blog about.


    There's a store section called "Apps Starter Kit" which lists a dozen or so applications that Apple are promoting as "must have" installs for new iOS users. I installed a handful of these on my iPhone 3GS, but the one that has most impressed me so far is the iOS edition of DragonDictate.


    It's a "split brain" app, by which I mean it uses "the cloud" to perform the speech-to-text conversion. So far I have been quite impressed with the accuracy of the process; in fact I created this blog post by dictating while walking the dog, with just a little editing afterwards to tidy up and add hyperlinks. I suppose it is a little like a poor man's edition of Siri, minus the pretend A.I. and the search and reminders integration.


    You can get text by dictating into a text box within the application, and there is a quick menu of options that allow you to create an SMS or an e-mail, or copy the text to the system clipboard for use in other applications. This workflow isn't too clunky, and although dictating text into your phone is a little stilted, it doesn't seem to be significantly less effective than my relatively crappy typing on the iPhone on-screen keyboard.


    The app was free; presumably it's intended as a promotional device to introduce users to the Dragon family of software applications. Obviously there are some privacy concerns raised by having the voice processing performed on a remote server, but the terms and conditions include a privacy policy which guarantees to preserve your anonymity and keep your data private. The application even prompted me to ask if I wanted all of my contact names uploaded to the remote service to improve name recognition, and took pains to explain that this would only include name fields from my contacts database and no other personally identifying information or contact details.


    I am not sure I would make a habit of using it for writing long articles, or even blog posts like this, but I think it could prove quite useful for short e-mail replies, or for sending SMS messages in situations where it's inconvenient to type.



    posted by cms on
    tagged as
  12. Hello there, old friend #movingin

    Of course, I bought and read the Jobsography, Kindle edition, naturally. While I'm not sure I identify with all the howling fanboys' anguished reviews, given my role as super-NEXTSTEP-fanboy I was a bit disappointed, although not particularly surprised, at the relative lack of NeXT content. So I was overjoyed when this 1986 PBS documentary, featuring NeXT in its pre-launch startup guise, popped up in its wake. The linked blog post also contains the NeXT stevenote, from the eventual product launch.

    posted by cms on
    tagged as
  13. The perfect laptop at last

    Of course it's not actually running NEXTSTEP. Of course, in a sense it is. Just like your phone.


    Thanks to eBay. I like the fact that the sticker arrived with a little template indicating the correct 28° of jaunt. I ignored it of course, and just lined it up by eye.

    posted by cms on
    tagged as