Walker Sampson

walker [dot] sampson [at] icloud [dot] com

KryoFlux Webinar Up

In February, I took part in the first Advanced Topics webinar for the BitCurator Consortium, centered on using the KryoFlux in an archival workflow. My co-participants, Farrell at Duke University and Dorothy Waugh at Emory University, both contributed wonderful insights into the how and why of using the floppy disk controller for investigation, capture, and processing. Many thanks to Cal Lee and Kam Woods for their contributions, and to Sam Meister for his help in getting this all together.

If you are interested in using the KryoFlux (or do so already), I recommend checking out the webinar, if only to see how other folks are using the board and the software.

An addendum to the webinar: setting up in Linux

If you are trying to set up the KryoFlux in a Linux installation (e.g. BitCurator), take a close look at the instructions in the README.linux text file located in the top directory of the package downloaded from the KryoFlux site. It covers the dependencies you will need and the process for allowing a non-root user (such as bcadmin) to access floppy devices through the KryoFlux. This setup will avoid many permissions problems down the line, since you will not be forced to use the device as root, and I have found it critical to getting the software running correctly in Linux.
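
To give a sense of what that setup involves, here is a minimal Python sketch of the udev step: it writes a rule granting a non-root group access to the board and reloads udev. The USB IDs, group name, and rule file name are placeholders of my own, not values from the KryoFlux documentation; README.linux remains the authoritative source for the actual rule and the dependency list.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: grant a non-root group access to a KryoFlux board
via a udev rule. The IDs, group, and file name below are assumptions; use the
values given in README.linux from the KryoFlux package."""

import grp
import subprocess
import sys
from pathlib import Path

RULE_FILE = Path("/etc/udev/rules.d/80-kryoflux.rules")  # hypothetical file name
GROUP = "floppy"                                         # hypothetical group name
# Placeholder USB IDs: replace xxxx/yyyy with the vendor:product pair shown by
# lsusb (or listed in README.linux) when the board is plugged in.
RULE = ('SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", '
        f'MODE="0660", GROUP="{GROUP}"\n')

def main() -> None:
    if GROUP not in (g.gr_name for g in grp.getgrall()):
        sys.exit(f"Group '{GROUP}' does not exist; create it and add your user (e.g. bcadmin) to it first.")
    try:
        RULE_FILE.write_text(RULE)  # writing under /etc/udev requires root
    except PermissionError:
        sys.exit("Run with sudo to write the udev rule.")
    # Ask udev to pick up the new rule without a reboot.
    subprocess.run(["udevadm", "control", "--reload-rules"], check=True)
    subprocess.run(["udevadm", "trigger"], check=True)
    print(f"Wrote {RULE_FILE}; replug the KryoFlux and try the software again as a non-root user.")

if __name__ == "__main__":
    main()
```

The point is simply that the rule hands the device to a group your working account belongs to, so the KryoFlux software never needs to run as root.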

Goodbye Goodwill Computer Museum, Hello Museum of Computer Culture

Museum of Computer Culture

I’d like to call attention to a big change for the Goodwill Computer Museum, where I volunteered in Austin, Texas, and worked with many incredibly smart, fun people like Russ Corley, Virginia Luehrsen, Stephen Pipkin, Austin Roche, Phil Ryals, and lots of others.

The big change is that, because of organizational and aspirational differences between Goodwill and the museum, the museum was taken out back and shot.

No, I kid. But the aforementioned team has amicably parted ways and is now reborn as the Museum of Computer Culture.

The mission remains broadly the same:

We are an Austin, Texas nonprofit organization seeking to inspire and educate the public with engaging exhibits on the evolution of computer history and its influence on our common cultural experience, develop and support digital archival studies through services to universities and other institutions, and conserve computer history information through digital preservation.

I wish them, and Austin Goodwill Computerworks, from which the team got its start, the best of luck!

Phil has already posted a thorough, technical and first-hand account of his work as a technician for the Autodin, the Department of Defense’s first computerized message switching system.

Book Review: Racing the Beam [re-post]

A re-post from the Preserving Games blog, February 12, 2010.

Montfort, N., & Bogost, I. (2009). Racing the Beam: The Atari Video Computer System. Platform Studies. Cambridge, Massachusetts: MIT Press.

Racing the Beam: The Atari Video Computer System

Just want to give a brief rundown on a really great read I’ve come across. MIT Press has started a "Platform Studies" series of books, where the idea is to examine a platform and its technologies to understand how they inform creative work done on that platform. Platforms can range from a gaming console to a programming language, an operating system, or even the Web itself, if that is the platform upon which creative work is being made. The platform in this case is the Atari Video Computer System, the first Atari home system, later referred to as the Atari 2600 in the wake of the newer Atari 5200.

The authors examine the Atari VCS as a computing system and take care to elaborate on the unique (really exceptionally odd) constraints found there. Six games are investigated in chronological order, giving the reader a sense of the programming community’s advancing skill and knowledge of the system: Combat (1977), Adventure (1980), Yars’ Revenge (1981), Pac-Man (1982), Pitfall! (1982), and Star Wars: The Empire Strikes Back (1982).

The most prominent technical details are explained in the first few chapters, and they illuminate each game’s construction as an exceptional act of engineering and ingenuity. Just to give an idea of the unique affordances of the Atari VCS, here are a few of the most characteristic details:

  • The custom sound and graphics chip, the Television Interface Adapter (TIA), is designed specifically to work with a television’s CRT electron beam. The beam sprays electrons onto the inside of the TV screen, left to right, one horizontal scan line at a time, taking a brief break at the end of each line (a "horizontal blank") and a longer break after the bottom line, before resetting to the top and starting over again (a "vertical blank"). A programmer only has those tiny breaks to send instructions to the TIA, and really only the vertical blank provides enough time to run any game logic.
  • It was imperative that game logic be handled in these breaks because the Atari VCS had no room for a video buffer. This meant there was no way to store an image of the next frame of the game; all graphics instructions are written in real time (sound instructions had to be dropped in during one of the breaks). A designer or programmer could choose to restrict the visual field of the game in exchange for more time to send game logic instructions. Pitfall! is an example of this.
  • This means there are no pixels on the Atari VCS. Pixels require horizontal and vertical planes, but for the Atari VCS there are only horizontal scan lines; there is no logical vertical division at all in the computational system. As the beam moves across the screen, a programmer can send a signal to one of the TIA’s registers to change the color. Thus, the "pixels" are really a measure of time (the clock counts of the processor) and not of space; the sketch following this list illustrates the idea.
  • Sprites, such as they existed on the Atari VCS, were hard-wired into the system’s TIA chip. Programmers had five movable objects to work with: two player sprites, two missiles, and one ball. Reworking that setup (clearly designed for Pong and the like) into something like Adventure, Pitfall!, or even the Pac-Man port is an amazing feat.
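
To make the "pixels are time" idea concrete, here is a small Python sketch of the loop structure described above. It is my own illustration, not code from the book: real VCS programs are cycle-counted 6502 assembly, the register writes below are stand-ins, and the line and cycle counts are the commonly cited NTSC figures.

```python
"""Illustrative sketch of the Atari VCS "racing the beam" loop structure.
Not emulator code: the TIA here is a stub, and only the shape of the loop
(game logic in the vertical blank, register writes per scan line) matters."""

VISIBLE_LINES = 192     # picture lines in a typical NTSC VCS frame

class FakeTIA:
    """Stand-in for the TIA's write-only registers (COLUBK, GRP0, WSYNC...)."""
    def __init__(self):
        self.writes = 0

    def write(self, register: str, value: int) -> None:
        self.writes += 1            # a real TIA latches the value immediately

def vertical_blank(state: dict) -> None:
    """The only comfortable window for game logic: move objects, score, etc."""
    state["player_x"] = (state["player_x"] + 1) % 160   # 160 visible color clocks per line

def draw_frame(tia: FakeTIA, state: dict) -> None:
    vertical_blank(state)                               # game logic happens here
    for line in range(VISIBLE_LINES):
        # Inside each scan line there are only ~76 CPU cycles to change colors,
        # reposition objects, and so on before the beam has passed.
        tia.write("COLUBK", line % 256)                 # e.g. vary the background color
        if line == state["player_y"]:
            tia.write("GRP0", 0b00111100)               # player graphics for this line only
        # (a real kernel would strobe WSYNC here to wait for the next line)

if __name__ == "__main__":
    tia, state = FakeTIA(), {"player_x": 0, "player_y": 100}
    draw_frame(tia, state)
    print(f"one frame drawn, {tia.writes} register writes, and no frame buffer anywhere")
```

The shape is the whole point: everything on screen is produced by register writes made while the frame is being drawn, which is why the book’s title is so apt.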

The book doesn’t refrain from the technical. I could have used even more elaboration than what is presented, but after a certain point the book would turn into an academic or technical tome (not that there’s anything wrong with that), so I appreciate the fine line walked here. The authors succeed at illuminating the technical constraints well enough for the general reader to understand the quality of the engineering solutions being described. Moreover, the authors leave room to discuss the cultural significance of the platform and to reflect on how the mechanics and aesthetics of these Atari titles have informed genres and gameplay since.

Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom [re-post]

A re-post from the Preserving Games blog, January 24, 2010.

Hutchison, A. (2008). Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom. Game Studies, 8(1). Retrieved from http://gamestudies.org/0801/articles/hutch

This 2008 Game Studies article examines the effect technology (or the "techno-historic" context of a game work) has on game aesthetics. The author defines "game aesthetics" as "the combination of the audio-visual rendering aspects and gameplay and narrative/fictional aspects of a game experience." It is important to note that the audio-visual aspects are included in this definition along with the narrative/fictional components. This is because the author later argues that advancing audio-visual technology will play an important role in advancing the narrative aspect of games.

The article begins with a comparison of two iconic computer games of the mid-1990s: Myst and Doom. Specifically, the author examines each game’s design response to the technological limitations of the PCs of the time. Very briefly, we see that Myst takes the "slow and high road" to rendering and first-person immersion, while Doom adopts the "fast and low road." As the author explains, each response was prompted by the limits on the rendering a personal computer could perform at the time. For its part, Myst’s design simply skips real-time 3D rendering and uses only pre-rendered, impeccably crafted (for the time) images to move the player through the world. Minor exceptions exist where QuickTime video is cleverly overlaid onto these images to animate a butterfly, a bug, a moving wheel, and so on. This overall effect very much informs the game’s aesthetic, as anyone who played the original can recall. Myst is a quiet, still, contemplative, and mysterious world. Continuous and looping sound is crucial to the identity of the world and the player’s immersion. Nearly every visual element is important and serves a purpose; the designers could not afford to draw scenes extraneous to the gameplay. The player’s observation of the scenes available is key, and the player can generally be assured that all elements in the Myst world warrant some kind of attention. Hardware limitations of the time, such as the slow read times of most CD-ROM drives, reinforce this slow, methodical gameplay and visual aesthetic.

Doom, by contrast, uses real-time rendering at the expense of visual nuance and detail. Doom achieves immersion through visceral and immediate responsiveness, and its aesthetic is one of quick action and relentless urgency. The low resolution of the art and characters is compensated for by the quick passing of those textures and objects, and by the near-constant survival crisis at hand. Redundancy of visual elements and spaces is not an issue: the player can face down hordes of identical opponents in similar spaces (sometimes the exact same space) and not mind at all, because the dynamism of the gameplay is engaging enough to allow such repetition. Pac-Man had the same strength.

From this comparison the author goes on to speculate about how techno-historic limitations inform aesthetics in general, and about whether the increasing capacity of personal computers to render audio-visual components in extreme, real-time detail will inform the narrative/fictional aspects of games as well. One only needs a passing familiarity with games to know that this aspect of the medium has been widely disparaged in the media and in some academic writing. Here are some quotes the author uses to characterize the degenerative trend of popular media and the game industry’s complicity in the coming intellectual apocalypse:

Perhaps lending strength to this phenomenon is a current popular culture stylistic trend which emphasises “spectacle” over narrative and gameplay. Peter Lunenfeld has identified this broad movement in popular culture generally:

Our culture has evacuated narrative from large swaths of mass media. Pornography, video games, and the dominant effects-driven, high concept Hollywood spectaculars are all essentially narrative-free: a succession of money shots, twitch reflex action, and visceral thrills strung together in time without ever being unified by classic story structure (Lunenfeld, 2000, p.141).

And more specifically dealing with games:

“It is a paradox that, despite the lavish and quite expensive graphics of these productions, the player’s creative options are still as primitive as they were in 1976” (Aarseth, 1997, p.103).

Most interesting is the observation that richer media capabilities do not necessarily translate to glossier, more superficial renderings. Richer media can mean a more meaningful experience for the player. Nuance and subtlety can be introduced, and more information-rich media can mean more powerfully conveyed characters and a more fully realized narrative.

On top of this, one can expand the definition of "story" and "narrative," as id developer Tim Willits argues in this Gamasutra report:

“If you wrote about your feelings, about your excitement, the excitement you felt when new areas were uncovered [in Doom] — if you wrote it well, it would be a great story,” Willits says. “People call it a ‘bad story,’ because the paper story is only one part of the game narrative — and people focus on the paper story too much when they talk about the story of a game.”

Information, he maintains, is learned through experiences, and the experience of playing a game is what forms a narrative, by its nature. Delivering a story through the game experience is the “cornerstone” of id Software’s game design, and the key when developing new technology.

Whatever your opinion on what constitutes story and narrative in media, the author of this piece makes a compelling argument that advancing technical capabilities can directly inform the narrative/fictional aspect of a game’s aesthetics, and certainly have done so in the past.

Modeling Computers on Omeka

Draft of an Omeka Display for Computer Hardware

This week I’ve been working with Omeka a good deal, experimenting with an approach to modeling and documenting a computer system through it.

I see two "issues" at present. The first is what metadata and documentation need to be provided for MITH’s purposes; the second is how to present and organize all this information.

It seems desirable to try to model hardware and computing systems through their component pieces and parts. This allows one to describe the specific locations of certain types of firmware and software, and it allows parcelling out documentation at a flexible level of granularity or generality depending upon the item being described (e.g. a motherboard, a ROM chip, a connector, a floppy, or a whole computer system).

For example, Apple IIe systems have either Apple DOS or ProDOS on them. Specifically, that software is located on the PROM chip of the Disk II controller card, and it operates with Applesoft II BASIC, located on a ROM chip on the motherboard. What appears as a fluid interface on the screen is really two pieces of software in two different places, and each has a distinct history and distinct properties. Articulating this distinction seems especially appropriate for organizations that will be using their systems for research and media access, and that would like to assess the details of a machine or of a piece of media at a glance.
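
As a rough illustration of that component-level model (my own sketch, not MITH’s actual schema or Omeka’s data structures; all class and field names here are invented), the Apple IIe example can be expressed as a small hierarchy of parts, each of which can carry its own software and its own documentation:

```python
"""Sketch of a component-level model for documenting a computer system.
Class and field names are invented for illustration; in Omeka each of these
parts would be an item with its own metadata, related to its parent item."""

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Software:
    title: str
    version: Optional[str] = None   # e.g. a DOS version, if known

@dataclass
class Component:
    """Any describable part: a motherboard, a ROM chip, a card, a whole system."""
    name: str
    resident_software: List[Software] = field(default_factory=list)
    parts: List["Component"] = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Yield every part in the hierarchy along with its nesting depth."""
        yield depth, self
        for part in self.parts:
            yield from part.walk(depth + 1)

# The Apple IIe example from above: two pieces of software in two places.
apple_iie = Component("Apple IIe system", parts=[
    Component("Motherboard", parts=[
        Component("ROM chip", [Software("Applesoft II BASIC")]),
    ]),
    Component("Disk II controller card", parts=[
        Component("PROM chip", [Software("ProDOS")]),   # or Apple DOS, per the example
    ]),
])

if __name__ == "__main__":
    for depth, part in apple_iie.walk():
        software = ", ".join(s.title for s in part.resident_software) or "-"
        print(f"{'  ' * depth}{part.name}  [software: {software}]")
```

In Omeka each of these parts would become its own item with its own metadata, linked to its parent; the sketch is only meant to show the shape of the data, and the granularity can stop wherever it stops being useful.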

There isn’t an organization I’m aware of that is doing this for its audience or users at this point. It seems typical to document extensively at the level of the computer system. That is intuitive, especially for the timeframe that saw so many vertically integrated personal computers (Commodores, Apples, IBMs), but the march of PC clones complicates that approach.

In another light, one can see this as the popular ideal of the computer. The iMac, for example, looks like it doesn’t have any parts, as if it sprang from the forehead of Jobs, fully formed and completely capable. And it is pleasing to the eye.

Anyways, it’s been really edifying to do this research. Omeka’s API has been pretty capable for this sort of task too.