In response to “You folks can help me out by telling me — were you exposed to any similar material in your artscene travels or was this just something that only happened in Vancouver?” as related to the article “Chinese ANSI art from the 604 — the missing link between the artscene and Big 5?” by Rowan Lipkovits.

It definitely existed in parts of California as well. Granted, California has a modest population of individuals with Japanese ancestry, and, personally speaking, I sought such things out more than most: I began learning Japanese at a relatively young age, with tutelage from several friends in preschool and elementary school, and even from one instructor who taught Japanese at the Defense Language Institute in Monterey, before seeking out more formal instruction in such subjects as a teenager. Several of the BBSes I frequented had Japanese animation subsections, and occasionally art like this would surface.

The reality is that, due to the ASCII-centric nature of computing in the 1980s (and much of it even today), there was little to be found outside of ANSI or ASCII. Most computers from Japan in that era used JIS, Shift-JIS or EUC as their character encodings, which were rarely, if ever, supported by terminal software in the USA. Browsers have improved this a bit, thanks in large part to the development of Unicode and its UTF-8 encoding, which cover emoji and a wide variety of human writing systems, even if they still, for the time being, omit fictional scripts such as Klingon and Tengwar.

For readers who are Star Trek and Tolkien enthusiasts, be heartened to know that antialiased fonts for such things already exist, along with proposed character assignments in Unicode tables; they just are not implemented officially at the time of this writing, as far as I know. The important thing is that Unicode exists to represent extant written human languages, many of which have been in use for thousands of years, since long before ASCII, ANSI, EUC, JIS, Shift-JIS, UTF-8, UTF-16 and such even existed, not simply ones invented for popular culture. It seems important to account for existing human writing systems before accounting for the ones invented more recently for the sake of book, movie or television franchises. That said, if you are into such things, I would love to see the Zentradi alphabet from マクロス 「Macross」, as well as the alphabet from 二ノ国 「Ni no Kuni」, see some more widespread implementation. I have even been able to successfully use a bitmapped font from the indie game Hyper Light Drifter earlier this year, so I know such things can be done! ^_^
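To make the encoding side of this concrete, here is a minimal Go sketch (my own illustration, not from the original discussion) showing why UTF-8 was such an improvement: ASCII characters still take a single byte, while kanji and emoji simply take more bytes in the same stream, with none of the escape-sequence mode switching that JIS (ISO-2022-JP) required:

```go
package main

import "fmt"

func main() {
	// Each Go string literal below is UTF-8 encoded; dumping its
	// bytes shows how many octets each writing system needs.
	for _, s := range []string{"A", "世", "界", "🙂"} {
		fmt.Printf("%q -> % x (%d bytes)\n", s, []byte(s), len(s))
	}
}
```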

However, even today, some operating systems are very constrained when it comes to representing non-ASCII characters. I primarily use a system running OS X these days, which has excellent Unicode support, but I have one laptop running the *Japanese* version of Windows 10, and running cmd.exe or powershell.exe fails to provide UTF-8 character rendering, meaning that even simple test scripts I have written will not render their characters correctly from a Windows 10 command line, e.g. http://gaps.artkiver.com/grey/files/108per24hours.sh (a bash script with some Sanskrit/Devanagari in UTF-8 encoding) and http://gaps.artkiver.com/grey/files/380commented.go

Both of those are basically the same script, but the second one is written in Google’s golang instead, which can be compiled and run even on Windows; when executed there, however, it displays only ▓-like square characters. Even DOS/V, the Japanese operating system, had more robust non-ASCII character set support on the CLI in 1990 than the Japanese version of Windows 10 demonstrates in 2016.
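I will not reproduce those files verbatim here, but a minimal Go sketch in the same spirit (the Devanagari string is my own stand-in, not the contents of the linked scripts) would be:

```go
package main

import "fmt"

func main() {
	// A UTF-8 Devanagari string; it renders fine on OS X and most
	// F/OSS terminals, but comes out as placeholder blocks in the
	// Windows 10 cmd.exe described above.
	fmt.Println("नमस्ते दुनिया") // roughly "hello, world" in Hindi
}
```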

Both of those work fine on OS X, even on a seven-year-old version of Leopard on a G4 PPC-based laptop (maybe older versions of OS X too, but I do not have anything older to test with at the moment). They also run and render characters without issue on some F/OSS operating systems, such as the BSDs and Linux distributions, AFAIK, though I have not done exhaustive cross-platform testing. ^_^

As a programmer who is a career network and system administrator, with a university degree in Language Studies focused on Japanese, I find that operating systems and programming languages with ASCII-constrained character encodings are vastly limiting, or result in a lot of effort and, IMNHO, wasted time on the programmer’s end trying to render such things using the blocky ANSI and ASCII art styles. To quote Rob Pike in his paper Systems Software Research is Irrelevant:

“Narrowness of experience leads to narrowness of imagination.”

For context, Rob Pike worked for Bell Labs on the Blit, a graphical terminal for Unix workstations, as well as on their research operating system Plan 9, and helped design the original UTF-8 encoding.

Personally speaking, I found that even when doing custom ANSI animations for an Amiga CNet BBS around 1994, IBM PC users not running the same terminal software would often complain, or even express fears that I was trying to hack them (talk about living in fear and misplacing anger!).

In actual fact, I primarily used Terminus on an Amiga up until the mid 1990s, and with that class of computer, that terminal software and CNet’s MCI scripting language, it was possible to do modestly sophisticated ANSI animations, even with text rendering on screen from right to left and beep-code audio accompaniment.

I looked at one of the programming reference manuals for MCI not too long ago, and it was basically glorified assembly. But then, so was the Hayes AT command set; the Hayes registers simply refer to registers in the modem rather than registers in a CPU. Such things, to me, despite being around 16–18 at the time, seemed like normal programmatic modalities encountered all over the place, ever since I had realized that BASIC was too slow to do much of interest on an Apple ][ or C64. Thankfully, many of the users who had modems and used BBSes in the 1980s and early 1990s were also experienced programmers, so that did not seem like an unreasonable learning curve to expect. Personally speaking, I began formal instruction in programming at the age of six, though I have since interacted with at least one Fairlight member who apparently began programming at age four. Their code is, admittedly, pretty impressive most of the time, as is to be expected.
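For anyone who never drove a Hayes-compatible modem directly, a typical session illustrates that register-oriented style (a generic example from memory, not from any particular manual; the annotations after the semicolons are mine, not part of the commands):

```
ATZ          ; reset the modem to its stored profile
ATS0=2       ; write S-register 0: auto-answer after 2 rings
ATS0?        ; read S-register 0 back (the modem echoes the value)
ATDT5551234  ; tone-dial 555-1234
ATH          ; hang up
```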

By the mid 1990s, with the increasing popularity of the WWW and the increased proliferation of the internet (which had previously existed for decades, primarily in research, education and military fields), there were even more users of computers, but far fewer sophisticated programmers or users, near as I can discern. Some from that era refer to AOL coming online as the “Eternal September” in alignment with this, and now, with Twitter and similar banal Snapchat/Instagram/Kik-type services, discourse appears to have diminished even further.

Folks who code in assembly at any level seem relegated to older users, or perhaps demoscene enthusiasts. By comparison, I am of the opinion that if you are writing in a language at the level of C or higher, you may be missing out; many today program in Ruby on Rails, Java, Python or some other language which is *implemented* in C. For them, assembly probably seems impossibly esoteric, when in actual fact it is more akin to basic arithmetic, like addition and multiplication, compared with, say, calculus. ;-/ It does not need to be that way, but I admit it probably helped that I began programming on slower, more constrained systems, which gave me an idea of where low-level abstractions were preferable to higher-level language abstractions and potential code bloat. Such disparities were much more evident with CPUs running at 1 MHz, such as the classic MOS 6502, or 7 MHz, as with the venerable MC68000.
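To show just how un-esoteric it can be, here is a tiny MOS 6502 assembly sketch (my own illustration, not from any particular program) that adds two numbers; conceptually it really is closer to grade-school arithmetic than to calculus:

```
        LDA #$05    ; load the accumulator with the value 5
        CLC         ; clear the carry flag before adding
        ADC #$03    ; add 3 to the accumulator (A is now 8)
        STA $0200   ; store the result at memory address $0200
```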

Even the CNet MCI ANSI animations I explored felt very limiting, given that they were constrained to ANSI character sets. This was particularly so since such things did not render properly without better terminal software, and probably an Amiga, back before CBM declared bankruptcy in 1994. As such, it often seemed more sensible for me to forgo attempts at ANSI animation altogether, when coding up a demo would be just as constrained in terms of which computers could view it well, but allowed for much more impressive programming tricks, better audio and so on. One of the more popular Amiga demos from the early 1990s featuring Asian orthographic characters was Spaceballs’ 9 Fingers. This full-motion video with audio fits on two 880 KiB floppy disks and was traded freely throughout the Amiga scene. Here is a 50 FPS recording of it, though doubtless streaming it via YouTube will consume far more bandwidth than it would take to run it on an original Amiga:

https://www.youtube.com/watch?v=n4M7e79XTYk

(If, on the other hand, you have an Amiga or an emulator such as FS-UAE, you can find links to the disk images here: http://www.pouet.net/prod.php?which=100 )

I explored trying to approximate Japanese characters with the ANSI character set, but I considered most of those experiments complete failures. UTF-8 is “the way” forward in the 21st century, especially if you want correct character rendering across a wide range of operating systems and basically every browser. I find it sad that some of the command line offerings in current operating systems are more limited; thankfully, not all of them! ^^

Of the newer “high level” programming languages, I have a soft spot for Google’s golang, because even its “hello world” on https://golang.org is “Hello, 世界” (世界 being the orthographic representation of “world”). Swift and Rust also treat UTF-8 support as a given. But of those three, golang has the smallest toolchain when compiling from scratch, is not preferentially coupled to a specific hardware vendor the way Swift caters to Apple users, and I prefer Go’s BSD license to Rust’s or Swift’s Apache license as a better-vetted, simpler F/OSS license. Moreover, when building Rust from source, it has more dependencies and takes about 45+ minutes to compile just to get a runtime environment. Contrast this with golang, which in the same VM and build environment (FreeBSD, FWIW) can be built from source in much less time (about 2–5 minutes via cd /usr/ports/lang/golang && make install, run as root via su, or with sudo if you prefer and have it installed) and has far fewer dependencies. I have not yet tried to compile Swift from source personally, but its open source release is pretty recent, if I recall correctly, and I am looking for paid work more than I am dabbling in programming projects for the most part (speaking of which, if you have work that is paid and features internationalization, localization, Japanese and English, system and network administration, operating system level design, and so on, please let me know!).
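For reference, that front-page example is essentially the following, and it compiles and prints the kanji correctly out of the box:

```go
package main

import "fmt"

func main() {
	fmt.Println("Hello, 世界")
}
```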

When I first got online more directly, rather than using BBSes which merely had UUCP and NNTP feeds, but via systems that had telnet and SLIP access, I sought out Japanese animation themed IRC channels. Even before that, I would seek out Japanese animation themed BBSes; I ran one myself, co-sysoped another, and was a co-sysop of the anime subsection of at least two other systems. ^^ Back then, a lot of that sort of material was pretty obscure in North America. As a frame of reference, AnimeExpo, an annual convention focused on Japanese animation, now in 2016 has well over 100,000 attendees. It began in 1991 as AnimeCon, with fewer than 1,000 attendees, and I was one of them. Moreover, even when I did find BBSes with Japanese animation themes in the 1991 time frame, they rarely, if ever, had much in the way of EUC/JIS/Shift-JIS art, and even more rarely did I encounter ANSI art trying to mimic such things.

Thankfully, things today are a bit better. The website below demonstrates Japanese being correctly rendered from right to left, top to bottom, as has been standard in Japanese writing and typography for centuries, and it renders correctly in WebKit (i.e. Safari) as well as Chrome and Internet Explorer (and probably Firefox and other browsers as well). It is not my site, but it serves as inspiration for what Japanese language typography *should* look like orthographically, and it makes me happy that it renders correctly on various browsers, even on different operating systems. Command line tools, word processors and so on should have similar goals for Japanese and CJKV orthographic representation; if your OS or website cannot render that, and does not have complete UTF-8 or Unicode support, it is not even catching up to where the bar has already been set: http://www.shizukuya.com/legacy/geocities/
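For what it is worth, vertical right-to-left layout is achievable in browsers with a few lines of standard CSS these days; a minimal sketch (my own, not taken from that site) looks like this:

```html
<!-- Japanese laid out in vertical columns, read right to left -->
<div style="writing-mode: vertical-rl; height: 12em;">
  昔々、あるところにおじいさんとおばあさんが住んでいました。
</div>
```

(The sentence is the stock folktale opening, roughly “Long ago, in a certain place, there lived an old man and an old woman.”)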

Hopefully that helps provide a broader frame of reference from that time frame, from someone who was living in a different locale than Vancouver, BC, and catches everyone up on how Japanese and related character sets can be rendered in browsers today. For reference, back then I think I was in the 408 area code, but that region has expanded and had its area code changed since I was younger.

I hope this post has been informative and provided some context and perspective on non-ASCII non-ANSI art scene related issues as I saw them in the early 1990s.

Thanks for reading!
