Caveat: W(h)ither the blogs?

I had a horrible panic today. My blog(s) disappeared. My whole server disappeared. I couldn’t even access the “back end” via SSH, nor could I reboot it on the hosting website.
picture
I’ve been lazy, the past few months. I had no current backup files. The most recent backup was what I had made before leaving on the huge road trip, in November. That would be 4 months of blogging disappeared, if the server was trashed.
I opened a help ticket with the hosting provider.
After several hours, it turned out to be a problem at the provider. The host machine, where my virtual server lives, had some technical issue, I guess. Maybe a guy tripped on an extension cord. Who knows.
Anyway, nothing was lost. But it was a stressful panic.
I have subsequently created up-to-date backups for both blogs, and some other data on my server. I also decided to invest in a $5 / month backup plan. Who knows, maybe my server host arranged this crash to drive customers to its backup plans? *Shrug*
Such is life.

Caveat: Typepad broke my heart (and, not incidentally, broke my blog) today.

I found out today that my blog host, Typepad, has altered a functionality upon which I have relied heavily. I have, for the last 4 years, been hosting all my pictures "off-site" relative to the blog. This helps me keep them organized, helps me keep control of them from an "intellectual property" standpoint, and makes it easier to be "disaster resistant" in the event of problems with data integrity at the blog hosting server. 

It relies, however, on the blog host software respecting the integrity of the out-linking URLs for all those pictures.

If you scan through my blog today, you will see that all my picture links are broken. ALL OF THEM. Typepad doesn't approve of the link protocol of my picture-hosting server (i.e. it's "h t t p" rather than "https"). [And holy crap! – it's using some kludgey rewrite on the actual text of my blog entries – I can't even MENTION "h t t p" without it being "corrected" – hence the spaces in the mentions, here. So it's not only bad policy, it's bad programming, too!]

I have two possible solutions.

1) Migrate my photos to a different server, so that the links will work again (the current picture server doesn't accommodate the new, supposedly "more secure" URLs that Typepad is forcing on me)

2) Migrate my blog to a different host. All of it.

Both of these represent a lot of work.

I feel I can no longer trust my blog host with my data, however.

So much for the "lifetime guarantee." I have been with Typepad for 14 years. I had some intuition that it would come to a bad end, but I had hoped against hope it wouldn't. Such hopes were unfounded, as we can see.

I guess I have a new project to work on, to procrastinate on my taxes.

[daily log: walking, 4km]

Caveat: Oh windows registry, how I missed thee!

One thing I did when I was in Portland two weeks ago was I bought a new laptop computer. I wanted to buy in the US because I could get a laptop with the Windows operating system in English – if you buy a laptop in Korea, it will speak Korean to you (meaning all the system error messages, all the setup and config, etc.), and Microsoft has a ridiculous policy whereby if you want to change an operating system's language, you have to buy the operating system again!

The reason to buy in Oregon, specifically, is that Oregon has no sales tax. So I bought the computer there. And now I have it here in Korea.

Having been more or less content with my Linux-based resurrection of my old Korean desktop, I'm finding it a bit rough to transition my computer habits back to Windows. Of course, I use Korean-speaking Windows at work, but I won't be taking my desktop back to the US, so I needed a new laptop for all my home-based stuff, especially my geofictioning and server-development hobby, such as it is.

Windows 10 is smooth and professional, of course, but it really gets on my nerves. It makes assumptions about the way a person might want to work, which run counter to how I prefer to work.

I have hacked the registry numerous times, already, to get it to behave the way I want. In each case, the steps are as follows: 1) hack the registry to make visible some system option that is already built into the system, 2) set the option the way I want. Why do they hide these options? 

First and foremost, I had to kill off the deeply annoying Cortana. What is this, Clippy on opioids? It's smooth but insidious, and I felt compelled to kill it off during my first hours of ownership. I have since had to find ways to prevent the system from insisting on connecting to my Microsoft account (if I want to share things with Microsoft, I'll do so on a case by case basis, right?), to prevent it from going to screen saver when I leave my computer unattended (how is this not a default-accessible option?), and to better manage how it behaves with respect to its power-management options.
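To illustrate the pattern: the Cortana kill, for instance, can be expressed as a registry fragment like this one, which sets the documented AllowCortana policy value. Consider it a sketch – the exact keys involved in my various hacks varied case by case.

```reg
Windows Registry Editor Version 5.00

; Disable Cortana via the Windows Search policy key.
; AllowCortana is a documented policy value; 0 = disabled.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Windows Search]
"AllowCortana"=dword:00000000
```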

Sigh. I'll get used to it. 

Meanwhile, just for the heck of it, I got it running dual monitors, by hooking the laptop up to my desktop monitor as well as the laptop's. Thus, in the picture below, I can do email and websurfing on one monitor, while I hack around on my server on the other monitor.

picture

[daily log: walking, 7.5km]

Caveat: Gitsoft

Microsoft is buying GitHub. If you've never worked in the field of software development or systems administration, this is meaningless to you. GitHub, however, is a remarkable and important website if you do things with computers at the development level. In my recent adventures with setting up my own fully functional Ubuntu server running the "OSM stack", GitHub was nigh indispensable.

My take on this acquisition can be simplified as follows:

  1. Good for Microsoft: looks like I won't be selling my Microsoft stock anytime soon. Like the company's other gestures toward the Linux ecosystem (e.g. SQL Server for Linux, the bash shell for Windows), it shows that the bigwigs at MS "get" where the best devs are at. Devs appear to be in the driver's seat in Redmond, and it shows in many of their decisions.
  2. Bad for me: in my role as a free software consumer, I'm preemptively depressed. At some point, gates are likely to appear on this once-upon-a-time opensource Mecca. Should I close my GitHub account now, or wait for Microsoft to send me a notification about my "free upgrade to a paid account" in the uncertain future?

[daily log: walking, 7km]

Caveat: maybe just stroll in the spring drizzle

I suppose I always have a bit of doldrums around the equinox.

This past weekend was singularly frustrating. I was trying to do computer things, with my little server. Trying to port an application I was playing with to a different database. And I just failed to figure it out. I guess I'm doing this computer stuff partly to challenge myself, because I've grown frustrated with my other "hobbies," such as they are (e.g. learning Korean, the geofiction thing, writing). So then I feel frustrated with this, too. So I have to just back off and let myself be "unproductive" I guess.

I'll go out and stroll in the spring drizzle.

[daily log: walking, 7.5km]

Caveat: In the mud

[This is cross-posted from my other blog.]

This isn't exactly geofiction, but I was messing around with a new project on my server.

Way back in the day (I am somewhat old), I used to play MUDs (Multi-User Dungeons). These are text-based computer games of various kinds – no graphics at all. They're a kind of interactive "choose your own adventure" text, you might say. But the game mechanics in them are the ancestors of modern MMORPGs such as World of Warcraft, and there are even conceptual connections to Grand Theft Auto or Minecraft.

Text-based games like these are quite old – they existed on mainframes at businesses and universities before PCs were even a thing, going back to the 1960s and 70s. Because of my family's connection to the local university, I was a rare child of the 1970s who actually played computer games before PCs were invented! These were MUDs and other text-based games.

When I noticed I had my own server up and running, it occurred to me that I could build a MUD. Possibly, it's not even that hard. Sure enough, there is open-source software that will run a MUD on a server for you.

I chose a package called CoffeeMud. I'm still messing with it. It's very unlikely I'll ever do anything with it. But I had this domain-name, "Hellbridge.com", lying around, so I thought, sure, make a MUD.

  _    _        _  _  _            _      _
 | |  | |      | || || |          (_)    | |
 | |__| |  ___ | || || |__   _ __  _   __| |  __ _   ___
 |  __  | / _ \| || || '_ \ | '__|| | / _` | / _` | / _ \
 | |  | ||  __/| || || |_) || |   | || (_| || (_| ||  __/
 |_|  |_| \___||_||_||_.__/ |_|   |_| \__,_| \__, | \___|
                                              __/ |
Hellbridge.com                               |___/
Powered by CoffeeMud v5.9

I originally acquired the "Hellbridge.com" domain for a quite different purpose: it was intended to be a "satire-website" for a place where I once worked – "Hellbridge" sounds similar to that place's name, but with a darker connotation. But looking at it now, I thought it would make a great name for a MUD. So there it is.

I liked the CoffeeMud package because the admin and config of the site is mostly done from within the game. That's cool. So I create an "Archon" character, who is like God. I walk around the MUD and type "Create chair" and a chair falls from the sky. Likewise with any other object, room, monster, or character class. That's fun.

The Archon character is created in the empty, default "Fresh Install" room, by reading a book that is placed there. I read the book and I became a God.

Nice book. Note the stats jump in the prompt.

<20Hp 100m 100mv> read book
The manual glows softly, enveloping you in its magical energy.
The book vanishes out of your hands.
<1403Hp 571m 595mv>

It's not live yet. It may never be. But meanwhile, I thought it was interesting to try it out.

It's a little bit like geofiction – you're creating an imaginary world, after all.

[daily log: walking, 7km]

Caveat: Some weeks…

[This is a cross-post from my other blog.]

And then, some weeks, I don’t get much done.

I started working on trying to customize my Rails Port (the main “copy” of the OpenStreetMap slippy map), and got very bogged down in the fact that the OpenStreetMap Rails Port is highly complex software written in a language and using an architecture unfamiliar to me: the infamous “Ruby on Rails.”

I dislike the way that the actual name “OpenStreetMap” is hard-coded throughout all the little modules. It seems like a poor application design practice, especially for an opensource project. One area where the name proliferates is in all the internationalization files. So I started wondering how hard it might be to get all these internationalization files to be more “generic.” The answer: pretty hard, at least for me.

I’ve wandered off down a digressive passage where I’m learning about software internationalization under the Ruby on Rails paradigm, but I’m undecided how I want to handle this. Do I want to try to solve it the “right way”? Or just kludge it (most likely by deleting all the internationalization files except perhaps English, Spanish, and Korean)?
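To give a sense of the kludge option, here is a sketch (in shell, using the standard Rails config/locales path) of which files would survive a purge that keeps only those three languages – the path and filename conventions are generic Rails assumptions, not something I have verified against the Rails Port itself:

```shell
# Preview a locale purge: keep only en, es, and ko translation files.
# Run from the application root; config/locales is the Rails convention.
for f in config/locales/*.yml; do
  case "$(basename "$f")" in
    en.yml|es.yml|ko.yml) echo "keep $f" ;;
    *) echo "drop $f" ;;
  esac
done
```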

Meanwhile I have also got pulled away by some non-computer, non-geofiction projects.

So… not much to report, this week – nothing mapped, nothing coded, nothing configured.

Music to map by: Sergei Rachmaninoff, “Piano Concerto No. 2.”

CaveatDumpTruck Logo

Caveat: rich in entropy

We live in a weird era. Entropy has become a kind of commodity in and of itself.

In all this messing around with my new server… with trying out new things and tinkering with it all… well, of course I have to educate myself a bit about server security. It's a big, bad world out there, and if I'm going to be running a server that's publicly visible on the internet (offering up webpages, etc.) the little machine will be lonely and vulnerable, and I have to think about how to protect it from bots and blackhats.

In the field of network security, one thing that comes up is that you have to have some fundamental understanding of the types of cryptography used these days to secure systems. There's a whole infrastructure around generating "secure" public and private keys that computers hoard and exchange with one another to authenticate themselves. I really DON'T understand this, but I wade through the documentation as I e.g. try to set up a certificate authority on my server, because some of the things I'm running there apparently require it. I run the commands they tell me to, and hopefully my little server is sorta secure. But who knows.
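For what it's worth, the core of what those documentation pages have you do boils down to a couple of openssl invocations – this is a generic sketch with placeholder filenames and subject, not the exact commands for any particular certificate-authority setup:

```shell
# Generate a 2048-bit RSA private key, then a self-signed certificate
# valid for one year. Filenames and the CN are placeholders.
openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.crt -days 365 \
    -subj "/CN=myserver.example"
```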

I was fascinated to learn, however, about a thing that is used in crypto key generation on computers: system "entropy."

On one site I was looking at, there was a discussion about the fact that virtual machines (the sorts you rent from big companies to run cheap little servers, as I have done) have extremely low "available entropy" while your typical crummy desktop has very high "available entropy" – therefore when I generate my keys, I should do so on my desktop, not my server – I can upload the generated keys to my server later.

I think it's kind of a funny concept. The mass-produced, cookie-cutter, high quality, reliable servers found on the giant server farms are lacking in a certain commodity that they desperately need for their security: entropy. So the admins have to go out to their desktops to get the entropy they need. I sit here and I listen to my cruddy, 7-year-old Jooyontech Korean PC-clone desktop, with its perpetually failing CPU fan groaning intermittently and the weird system noises filtering through the sound channel onto my speakers, and I can rest assured that that's all part and parcel of having lots and lots of good, tasty entropy that I can feed to my server in the form of so many sweet, generated security keys.

One site I was reading said that typical desktop entropy should be around 2000 (in whatever units entropy is measured with…).

Out of curiosity, I plugged in the Linux command that would tell me my system's entropy. I got 3770. Wow! I'm rich! … in entropy, anyway.
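For the record, the command in question just reads a number out of /proc – this is standard Linux, and the units are bits of estimated entropy in the kernel's pool:

```shell
# Report the kernel's current estimate of available entropy, in bits.
cat /proc/sys/kernel/random/entropy_avail
```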

Meanwhile, my server, a virtual machine in some well-air-conditioned server farm facility across the Pacific in California, manages only 325 units of entropy. So sad. The chaos-poor, withered fruits of conformity.

[daily log: walking, 7km]

Caveat: A more technical summary of how I built my tileserver – part 2

[The following is a direct cross-post from my other blog – just so you don’t think I’m doing nothing with my free time, these days.]
[Update 20180923: continues from here]

The objective

I started discussing the coastline shapefile problem in my first post.

Early on, I found the tool called QGIS Browser and installed it on my desktop. I used this to examine the shapefiles I was creating.

The first step was to look at the “real Earth” OpenStreetMap-provided shapefiles I was trying to emulate – the two mentioned in my previous post:
/openstreetmap-carto/data/land-polygons-split-3857/land_polygons.shp
and
/openstreetmap-carto/data/simplified-land-polygons-complete-3857/simplified_land_polygons.shp

Here are screenshots of each one.

First, land_polygons.shp

picture

And here is simplified_land_polygons.shp

picture

The structure is pretty straightforward, but – how do I make these? Where do they come from? – aside from the non-useful explanation found in most places, which is that “OpenStreetMap generates them from the data”.

The coastline problem

The way that the shapefiles are generated for OpenStreetMap is not well documented. But after looking around, I found a tool on github (a software code-sharing site) developed by one of the OpenStreetMap gods, Jochen Topf. It is called osmcoastline, and it seemed to be the right way to proceed. I imagined (though I don’t actually know this) that this is what’s being used on the OpenStreetMap website to generate these shapefiles.

The first thing I had to do was get the osmcoastline tool working, which was not trivial, because apparently a lot of its components and prerequisites are not in their most up-to-date or compatible versions in the Ubuntu default repositories.

So for many of the most important parts, I needed to download each chunk of source code and compile them, one by one. Here is the detailed breakdown (in case someone is trying to find how to do this).

Installing osmcoastline

I followed the instructions on the github site (https://github.com/osmcode/osmcoastline), but had to compile my own version of things more than that site implied was necessary. Note that there are other prerequisites not listed below here, like git, which can be gotten via standard repositories, e.g. using apt-get on Ubuntu. What follows took about a day to figure out, with many false starts and incompatible installs, uninstalls, re-installs, as I figured out which things needed up-to-date versions and which could use the repository versions.

I have a directory called src on my user home directory on my server. So first I went there to do all this work.
cd ~/src

I added these utilities:
sudo apt-get install libboost-program-options-dev libboost-dev libprotobuf-dev protobuf-compiler libosmpbf-dev zlib1g-dev libexpat1-dev cmake libutfcpp-dev zlib1g-dev libgdal1-dev libgeos-dev sqlite3 pandoc

I got the most up-to-date version of libosmium (which did not require compiling, because it’s just a collection of headers):
git clone https://github.com/osmcode/libosmium.git

Then I had to install protozero (and the repository version seemed incompatible, so I had to go back, uninstall, and compile my own, like this):

Git the files…
git clone https://github.com/mapbox/protozero.git
Then compile it…
cd protozero
mkdir build
cd build
cmake ..
make
ctest
sudo make install

I had to do the same for the osmium toolset:

Git the files…
git clone https://github.com/osmcode/osmium-tool.git
Then compile it…
cd osmium-tool
mkdir build
cd build
cmake ..
make

That takes care of the prerequisites. Installing the tool itself follows the same process:

Git the files…
git clone https://github.com/osmcode/osmcoastline.git
Then compile it…
cd osmcoastline
mkdir build
cd build
cmake ..
make

I had to test the osmcoastline tool:
./runtest.sh

Using osmcoastline for OGF data

So now I had to try it out. Bear in mind that each command line below took several hours (even days!) of trial and error before I figured out what I was doing. So what you see looks simple, but it took me a long time to figure out. In each case, after making the shapefile, I would copy it over to my desktop and look at it, using the QGIS browser tool. This helped me get an intuitive, visual feel for what it was I was creating, and helped me understand the processes better. I’ll put in screenshots of the resulting QGIS Browser shapefile preview.

To start out, I decided to use the OGF (OpenGeofiction) planet file. This was because the shapefiles were clearly being successfully generated on the site, but I didn’t have access to them – so it seemed the right level of challenge to try to replicate the process. It took me a few days to figure it out. Here’s what I found.

Just running the osmcoastline tool in what you might call “regular” mode (but with the right projection!) got me a set of files that looked right. Here’s the command line invocation I used:
YOUR-PATH/src/osmcoastline/build/src/osmcoastline --verbose --srs=3857 --overwrite --output-lines --output-polygons=both --output-rings --output-database "YOUR-PATH/data/ogf-coastlines-split.db" "YOUR-PATH/data/ogf-planet.osm.pbf"

Then you turn the mini self-contained database file into a shapefile set using a utility called ogr2ogr (part of the GDAL toolset):
ogr2ogr -f "ESRI Shapefile" land_polygons.shp ogf-coastlines-split.db land_polygons

This gives a set of four files
land_polygons.dbf
land_polygons.prj
land_polygons.shp
land_polygons.shx

Here is a view of the .shp file in the QGIS Browser. Looks good.

picture

I copied these files into the /openstreetmap-carto/data/land-polygons-split-3857/ directory, and I tried to run renderd. This alone doesn’t show the expected “ghost” of the OGF continents, though. Clearly the simplified_land_polygons.shp file is also needed.

So now I experimented, and finally got something “simplified” by running the following command line invocation (note the setting of --max-points=0, which apparently prevents the fractal-like subdivision of complex shapes – technically this is not really “simplified” but the end result seemed to satisfy the osm-carto requirements):
YOUR-PATH/src/osmcoastline/build/src/osmcoastline --verbose --srs=3857 --overwrite --output-lines --output-rings --max-points=0 --output-database "YOUR-PATH/data/ogf-coastlines-unsplit.db" "YOUR-PATH/data/ogf-planet.osm.pbf"

Again, make the database file into shapefiles:
ogr2ogr -f "ESRI Shapefile" simplified_land_polygons.shp ogf-coastlines-unsplit.db land_polygons

This gives another set of four files
simplified_land_polygons.dbf
simplified_land_polygons.prj
simplified_land_polygons.shp
simplified_land_polygons.shx

And this .shp looks like this:

picture

Now when I copied these files to the /openstreetmap-carto/data/simplified-land-polygons-complete-3857/ directory, and re-ran renderd, I got a successful ghosting of the continents in the render (no screenshot, sorry, I forgot to take one).

Using osmcoastline for my own data

Now I simply repeated the above, in every respect, but substituting my own rahet-planet.osm.pbf file for the ogf-planet.osm.pbf file above. I got the following shapefiles:

land_polygons.shp

picture

simplified_land_polygons.shp

picture

And these, copied to the appropriate osm-carto data directory locations, give me the beautiful render you see now. [EDIT: Note that the view below of the Rahet planet is “live”, and therefore doesn’t match what shows in the screenshots above. I have moved on to a different concept with my planet, and thus I have erased most of the continents and added different ones, and the planet is now called Arhet.]

I actually suspect that the way I did it is not the completely “right” way to do things. My main objective was to give osm-carto shapefiles that it would find satisfactory – it was not to try to reverse-engineer the actual OSM or OGF “coastline” processes.

There may be something kludgey about using the output of the second coastline run in the above two instances as the “simplified” shapefile, and this kludge might break if the Rahet or OGF planet coastlines were more complex, as they are for “Real Earth.” But I’ll save that problem for a future day.

A more immediate shapefile-based project would be to build north and south pole icecaps for Rahet, in parallel with the “Real Earth” Antarctic icesheets that I disabled for the current set-up. You can see where the icecaps belong – they are both sea-basins for the planet Rahet, but they are filled with glacial ice, cf. Antarctica’s probably below-sea-level central basin. And the planet Mahhal (my other planet) will require immense ice caps on both poles, down to about 45° latitude, since the planet is much colder than Earth or Rahet (tropical Mahhal has a climate similar to Alaska or Norway).

Happy mapping.

Music to map by: Café Tacuba, “El Borrego.”


Caveat: Progress Made! – The map got served…

[This is a cross-post from my other blog.]

[Update 20180923: continued from here]

The OSM “Rails Port” is now running on my server, and I have successfully connected to the api via JOSM and rebuilt my test-version of my planet, Rahet.

It took me an entire week of googling and meditating before I solved the port problem. Ultimately, I was looking in the wrong place for clear documentation about it – I was hoping someone would write about it from the perspective of Rails, but finally I found the documentation that made it possible on the Passenger website, buried in an example. There’s a line that belongs in the apache config file, “PassengerRuby /usr/bin/ruby2.3” (or whatever version).

And that made all the difference.
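For context, that line sits inside the Apache virtual-host definition – roughly like the following sketch, where the domain and paths are placeholders rather than my actual configuration:

```apache
<VirtualHost *:80>
    ServerName osm.example.com
    DocumentRoot /var/www/openstreetmap-website/public
    # The line that made all the difference: pin Passenger to the right Ruby.
    PassengerRuby /usr/bin/ruby2.3
    <Directory /var/www/openstreetmap-website/public>
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>
```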

Here’s the link: MAP. [UPDATE 20210530: that link is broken – the test server is closed down. I am running a “live” planet for multiple users based on this original, at arhet.rent-a-planet.com]

So now you can look around. It’s just the “out-of-the-box” OpenStreetMap website (AKA Rails Port), with some minimal customization where I could find where to do it easily. I’ll continue working on that. I might actually disable the iD and Potlatch editing tools – I always use JOSM, and if it ever reaches a point where I’m allowing or inviting others to edit, I would make JOSM-use a prerequisite, I’m certain. JOSM, with its steep learning curve, seems like it would be a good way to “filter” people on the question of how seriously they’re taking a project.

There are a number of features that don’t work. I would like to figure out a way to disable the user sign-up page. That’s a kind of vulnerability for the types of use I’m intending for this set-up. Meanwhile, I’ve disabled it in a rather inelegant way, by “breaking” the sign-up page (by changing its name inside the appropriate folder on the app/views/user path).
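The rename itself is a one-liner – here’s a sketch, with the caveat that the exact view filename is my guess at a typical Rails layout, not necessarily what the Rails Port uses:

```shell
# "Break" the signup page by renaming its view template so Rails can't find it.
# The path is an assumption (somewhere under app/views/user).
f=app/views/user/new.html.erb
[ -f "$f" ] && mv "$f" "$f.disabled" || echo "signup view not found at $f"
```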

I’m happy.

I’ll write up how I figured out the coastline problem tomorrow, and begin deciding which features to retain and which to change in the Rails Port (i.e. think about customization).

[Update 20180923: continues here]

Music to map by: Run The Jewels, “Talk To Me.”


Caveat: Not making progress

[This is a cross-post from my other blog.]

I wanted to post a part 2 for my last post, about how I got the tileserver working. I was going to talk about coastlines. In fact, my tileserver IS working, but it feels a bit useless without the other half: the so-called Rails Port.

So I have become obsessed with trying to get the Rails Port running. And I keep running into problems. The fundamental problem is that I have never used Ruby (and/or “Ruby on Rails”) before. I don’t really understand it. It’s not a development environment I have any comfort with at all. I don’t really even get the overall model.

I can get a local version of the generic “openstreetmap-website” code running on port 3000 on my desktop. And I can get a similar “development” version running on my server. But I don’t know all the places I need to edit to get the Rails Port to “look at” my tile server and not the default OSM tileservers. And I don’t know what other files I need to customize to control e.g. users, site security, name presentation, etc.

I think I’m going to have to take a timeout on trying to set this up, and spend some time learning how to deploy a much simpler Ruby app on my server.

One bit that seems like it should be utterly trivial is how to get the application to present on port 80 (standard webpage) instead of port 3000 (Ruby’s default development port, I guess). I have installed Passenger for Apache and that’s how I can present the application on port 3000, but I guess Rails doesn’t cohabit well with other applications on Apache – e.g. the wiki, this blog, etc. So somehow… it has to get “wrapped” or “proxied” but the details of how to configure this are beyond my expertise.

I’m frustrated, so I’m going to take a break from this server stuff.

Music to map by: 박경애 (Park Kyeong-ae), “곡예사의 첫사랑” (“The Acrobat’s First Love”).


Caveat: A more technical summary of how I built my tileserver – part 1

[This is a cross-post from my other blog]

I thought I should put a discussion of how I did this, with much more detail, as I am sure there are other people out there in the world who might want to do something similar.

This is part 1. I’ll post part 2 later.

Background

I wanted to be able to serve Openstreetmap-style map tiles of my own fictional planet, in the same way that the site OpenGeofiction does, but using my own data set.

This process of building a tileserver is separate from the job of setting up an Openstreetmap-style apidb database to be able to edit the data set using tools such as iD, Potlatch, or JOSM. I’m still working on that.

Platform and Preliminaries

I deliberately set up my server on Ubuntu 16.04 (a flavor of Debian Linux) because I knew that OpenGeofiction runs in this environment. I’m not actually sure, but I assume Openstreetmap does too, though, given its scale, that may not be exactly the case, anymore – more likely it’s got a kind of customized, clustered Linux fork that has some genetic relationship to Ubuntu.

I thought it would therefore be easier to replicate the OpenGeofiction application stack.

Before starting this work, I had already installed MySQL and Apache and Mediawiki – except for Apache, however, these are not relevant to setting up a tileserver.

I had also already set up PostgreSQL (the preferred Openstreetmap database server), so the preliminary mentions of setting up this application were skipped.

Finally, using Apache’s sites-available config files and DNS, I had set up a subdomain on my server, tile.geofictician.net, to be the “outside address” for my tileserver. This will hopefully mean that if I ever decide to separate my tileserver from other things running on the same server, it will be somewhat easier to do.
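The resulting subdomain config ends up looking roughly like this – a sketch assembled from the standard mod_tile directives (the module and renderd paths shown are the usual defaults and may differ on another system):

```apache
LoadModule tile_module /usr/lib/apache2/modules/mod_tile.so

<VirtualHost *:80>
    ServerName tile.geofictician.net
    # Serve tiles rendered by renderd, per the renderd.conf layer definitions.
    LoadTileConfigFile /usr/local/etc/renderd.conf
    ModTileRenderdSocketName /var/run/renderd/renderd.sock
    ModTileRequestTimeout 0
    ModTileMissingRequestTimeout 30
</VirtualHost>
```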

S2OMBaTS with Deviations

Starting out, I mostly followed the steps and documentation at switch2osm.org’s detailed tutorial, here. Below, I refer to this page as S2OMBaTS (“switch2osm manually building a tile server”).

So I don’t see any need to repeat everything it says there. I just followed the steps given on that webpage exactly and religiously. What I’ll document are only the spots where I had to do something differently. These are my “deviations.”

  1. Where S2OMBaTS suggests creating a ‘renderaccount’ on the server to own all the tileserver-related directories and tools, I used my non-root regular username. I’m not sure this is good practice, and if I were setting something up as a “production” environment, I’d be more careful to segregate ownership of this collection of files, applications and services.
  2. There are some problems with authenticating a non-root user for PostgreSQL ('root' being the infamous 'postgres' superuser). I had to edit the /etc/postgresql/9.5/main/pg_hba.conf file so that the authentication method was "trust":

    # Database administrative login by Unix domain socket
    local   all   postgres   trust

    # TYPE  DATABASE  USER  ADDRESS  METHOD
    # "local" is for Unix domain socket connections only
    local   all   all   trust

    I think this might be a bad solution from a security standpoint, but it's the only one I could find that I understood and could get to work. PostgreSQL security is weird, to me. My DBA experience was entirely with SQL Server and Oracle, back in the day, and those databases' security is integrated with OS security more tightly, I think. Similarly, MySQL seems to assume linkages between system users and database users, so security for the matched pairs of users is linked. But it seems like PostgreSQL doesn't work that way.

  3. Where S2OMBaTS suggests using URI=/hot/ in the /usr/local/etc/renderd.conf file (which seems intended to hijack other applications' support for the already-existing “HOT” – Humanitarian Openstreetmap Team – layer), I used URI=/h/ instead. This was entirely arbitrary, and I could just as easily have used something more meaningful, as at OpenGeofiction, with e.g. URI=/osmcarto/.
  4. To test my installation, of course, I had to load some test data. S2OMBaTS uses a geofabrik snapshot of Azerbaijan. I decided, just for the sake of familiarity, to use a snapshot of South Korea instead. I had to spend quite a bit of time researching and tweaking the individual osm2pgsql options (parameters) to get it to run on my itty-bitty server, even for a fairly small dataset like South Korea’s OSM snapshot, so here’s the osm2pgsql invocation I used to load the data (YMMV).
    osm2pgsql --database gis --create --slim  --multi-geometry --keep-coastlines --hstore --verbose --tag-transform-script ~/src/openstreetmap-carto/openstreetmap-carto.lua --cache 2500 --cache-strategy optimized --number-processes 1 --style ~/src/openstreetmap-carto/openstreetmap-carto.style ~/data/south-korea-latest.osm.pbf
    

At this point, I reached the end of the S2OMBaTS tutorial.
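For anyone bothered by the “trust” workaround in step 2 above: a less permissive pg_hba.conf would look something like the sketch below, assuming you first create a password-protected database role for your render user (the role name ‘renderaccount’ here is just a placeholder, echoing the S2OMBaTS suggestion). I haven’t verified this against 9.5 myself, so treat it as a starting point, not a recipe.

```
# Administrative login stays local-socket-only, peer-authenticated
local   all   postgres        peer
# The render user authenticates with an md5-hashed password instead of "trust"
local   gis   renderaccount   md5
```

The matching role can be created with something like `createuser --pwprompt renderaccount` (run as the postgres superuser), then granted access to the gis database.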

Loading my own planet

I then had to customize things to load my own planet instead of a largely “naked earth” with South Korea well-mapped. The first step was easy enough. I just replaced the South Korea pbf extract with a pbf of my own planet, and re-ran the osm2pgsql step. I got the pbf extract of my planet by working with some kludges and with JOSM on my desktop machine. It was a “simplified” planet – just the continent outlines, a few cities, two countries with their admin_level=2 boundaries, and one tiny outlying island with lots of detail, which I borrowed from my city-state Tárrases at OpenGeofiction. It was composed as a kind of “test-planet” to keep things simple but hopefully test most of what I wanted to achieve in my tileserver.

Here’s the load script for that (essentially the same as used for South Korea, above).

osm2pgsql --database gis --create --slim  --multi-geometry --keep-coastlines --hstore --verbose --tag-transform-script ~/src/openstreetmap-carto/openstreetmap-carto.lua --cache 2500 --cache-strategy optimized --number-processes 1 --style ~/src/openstreetmap-carto/openstreetmap-carto.style ~/data/gf-planet.osm.pbf

The problem, of course, is that if you run the render at this point, you get all the features of the new planet, but the continent outlines and the land-water distinction are “inherited” from Earth. That’s because the mapnik style being used references the shapefiles produced by and downloaded from OpenStreetMap. The creators of the OpenStreetMap software, including the OSM carto style, didn’t take into account the possibility that someone would try to use their software to show a map of somewhere that wasn’t planet Earth, and consequently the need for these shapefiles is “hardcoded.” So the “Earth” shapefiles have to be substituted with alternate shapefiles extracted from the alternate planet dataset.
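I’ll get to what actually worked for me further down, but for orientation, the general shape of the substitution is something like the following sketch. GDAL’s ogr2ogr tool can read .osm.pbf files and write ESRI shapefiles; the layer name and paths here are illustrative placeholders, not my actual invocation (the “official” Earth land polygons are produced by a dedicated tool, osmcoastline, which does rather more work than this).

```shell
# Sketch only: pull polygon geometries out of the planet extract and
# re-project them to EPSG:3857 ("web mercator", which the
# openstreetmap-carto style expects). Layer and paths are placeholders.
ogr2ogr -f "ESRI Shapefile" land-polygons-split-3857/ \
    ~/data/gf-planet.osm.pbf multipolygons \
    -t_srs EPSG:3857
```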

Customizing Coastlines and Shapefile Hell

This was the hardest part for me. It took me more than a week to figure it all out. I’m not experienced with shapefiles, and don’t really understand them, and the process by which shapefiles get extracted from the OSM global dataset in a format that can be used by the openstreetmap-carto mapnik style is very poorly documented online. So it was a lot of Google-fu and experimentation, and downloading QGIS and teaching myself a bit about shapefiles, before I could get things working. It’s not clear to me that I really did it the right way. All I can say is that it seems to work.

The first steps I took were to try to simplify my task. I did this by chasing down the shapefile dependencies in the mapnik style sheet, and manually removing the ones that seemed less important. I did this mostly through trial and error.

The only file that needs to be edited to accomplish this simplification is the main mapnik xml file: <YOUR-PATH>/openstreetmap-carto/mapnik.xml. Bear in mind, though, that this file is the output of the carto engine (or whatever it’s called). By editing it, I have “broken” it – I won’t be able to upgrade my OSM carto style easily. But this is just a test run, right? I just wanted to get it to work.

So I edited the <YOUR-PATH>/openstreetmap-carto/mapnik.xml file and deleted some stuff. You have to be comfortable just going in and hacking around the giant xml file – just remember to only delete things at the same branch level of the tree structure so you don’t end up breaking the tree.

I removed the <Style></Style> sections that mentioned Antarctic icesheets – there were two. As things stand, my planet has no Antarctic icesheets, so why try to incorporate shapefiles supporting them?

Then, I eliminated the <Style></Style> section mentioning the <YOUR-PATH>/openstreetmap-carto/data/ne_110m_admin_0_boundary_lines_land directory, since these are land boundaries for Earthly nations. I figured if I couldn’t see land boundaries for my planet’s nations at low zooms, it would be no big deal. It’s not clear to me that this has been implemented on OpenGeofiction, either.

I also discovered that in fact, this file doesn’t even point to the <YOUR-PATH>/openstreetmap-carto/data/world_boundaries directory. So there was no need to worry about that one.
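For concreteness, the sections I was deleting look roughly like the skeleton below – the exact names and attributes vary by carto version, so this is illustrative, not a copy of my file. Note that each <Style> has a companion <Layer> that references it by name, and both halves need to go together, or mapnik will complain about the dangling reference.

```xml
<Style name="icesheet-poly" filter-mode="first">
  <Rule>
    <!-- polygon symbolizers, etc. -->
  </Rule>
</Style>
<!-- ...elsewhere in the file, the matching layer: -->
<Layer name="icesheet-poly">
  <StyleName>icesheet-poly</StyleName>
  <Datasource>
    <Parameter name="file">data/PLACEHOLDER-icesheet-shapefile.shp</Parameter>
    <Parameter name="type">shape</Parameter>
  </Datasource>
</Layer>
```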

So that left me with two shapefiles I had to recreate for my own planet’s data:
<YOUR-PATH>/openstreetmap-carto/data/land-polygons-split-3857/land_polygons.shp and <YOUR-PATH>/openstreetmap-carto/data/simplified-land-polygons-complete-3857/simplified_land_polygons.shp.

Let’s just summarize by saying that this is what took so long. I had to figure out how to create shapefiles that the mapnik style would know what to do with, so that my continents would appear on the render. It took a lot of trial and error, but I’ll document what’s working for me, so far.

*** To be continued ***

[Update 20180923: Continues here]

Music to map by: Héctor Acosta, “Tu Veneno.”

CaveatDumpTruck Logo

Caveat: Testing the leaflet widget on the blog

[This is a cross-post from my other blog (see previous blog entry)]

Here’s a live leaflet of my own tileserver with my own planet (stripped of detail because I want my database small as I test things). Welcome to Rahet. UPDATE, OCTOBER 2019: Being a dynamic window on the map, rather than a snapshot, means that since the “planet” shown is much changed, this view is not the view that existed when this blog post was written.

Here’s a view of Tárrases over at OGF on standard layer.

Here’s a view of Tárrases over at OGF on Topo layer. [UPDATE 20210530: The OGF Topo layer is no longer functioning.] [UPDATE2 20230315: The OGF Topo layer is once again functioning, and has been for over a year.]

That’s pretty cool.

Music to map by: Cold, “Bleed.”

CaveatDumpTruck Logo

Caveat: What am I doing!?

[This is a cross-post from my other blog (see previous blog entry)]

A few weeks ago, I decided to just go ahead and try to replicate the “OpenGeofiction Stack” by building my own server.

So I shelled out ₩25,000 (about $20 USD) a month for a low-end Linux server from one of the many companies that rent out cheap servers. It’s running Ubuntu 16.04.

I happened to have already bought, some years ago, the domain name ‘geofictician.net’, so I attached this name to my server, and I created some subdomains. I applied my moribund artistic skills and sketched up a little logo for the website, too. That’s also on this blog (at upper left). It’s a freehand drawing, but imitating some other images I looked at.

First I loaded the standard LAMP stack (MySQL and Apache), and then I installed mediawiki and configured it. I made a kind of “clone” of the OGF wiki. I even uploaded some of the articles I’d deleted from the main site. I managed to get the MultiMaps extension fork that Thilo built running, so I can point those wiki articles at OGF.

The one thing I’m frustrated with, in the wiki, is that the email user utility was impossible to configure to work with my postfix install on the server. Hence, for now, I’ve got the wiki using my gmail account to send emails, which I think isn’t an ideal solution. Then again, it doesn’t really matter, for now, because it’s just me, using the wiki alone.
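For the record, the gmail workaround amounts to pointing MediaWiki’s $wgSMTP setting at Google’s SMTP servers in LocalSettings.php – something like the following, though I’m reciting the shape of it from memory, and the credentials are obviously placeholders.

```php
// In LocalSettings.php: route wiki mail through gmail's SMTP
// instead of the local (in my case, uncooperative) postfix install.
$wgSMTP = [
    'host'     => 'ssl://smtp.gmail.com', // gmail's SMTP endpoint
    'IDHost'   => 'geofictician.net',     // used to build Message-ID headers
    'port'     => 465,
    'auth'     => true,
    'username' => 'yourname@gmail.com',   // placeholder
    'password' => 'your-app-password'     // placeholder
];
```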

Anyway, I think I’ll use this wiki to write all the overwikification I’ve felt compelled to refrain from writing on the OGF wiki. Maybe I’ll build a bot and make stubs for ALL of my locales on the map (8000 stubs! Now that’s overwikification).

Next, I started building a tile server. This was pretty complicated, and I don’t consider the task complete. I did manage to upload an OSM file of a planet I started building using JOSM a few years back and that hasn’t ever been “rendered” before, though I’ve been drawing paper maps of parts of this planet since I was in middle school.

Finally, a few days ago, I was able to test the success of the tile render by connecting to the tileserver using JOSM from my desktop. It was quite exciting to see my long-languishing planet, Rahet, rendered in JOSM, if only in the most skeletal of forms:
picture
Earlier today, I installed this blog, using the standard wordpress package, and it went quite smoothly. Good-bye, bliki.

I’m currently working on getting the OSM “Rails Port” up and running. I just ran a test and got some errors, so I’ll have to troubleshoot those. But I feel like the end is in sight.

Music to map by: 마마무, “1cm의 자존심.”

CaveatDumpTruck Logo

Caveat: Geofictician

I decided to start a separate blog on my new website.

There is a long history of me creating new "blogs" for one specific purpose or another. The longest-lived of my alternate blogs was the one I maintained for my job and students for several years. That blog still exists but it's largely dormant.

The reason for this new blog is that, although I don't mind sharing my geofiction activities here on this blog, I'm not sure how open I want to be about the rest of my life with fellow members of the geofiction community where I participate. That is, do they want or care to see my poetry, my ruminations on day-to-day classroom life, my oddball videos and proverb decipherments? 

Since I think it's better to keep those things separate, I decided to make a separate blog. I also did it just to support the "technical unity" (if you will) of the website I've been constructing. 

I may develop a habit of allowing the things I post on that other blog to appear here, but not vice versa. This blog would be the comprehensive "all Jared" blog, while that would be a kind of filtered version for the geofiction community. 

Anyway, here's the blog (blog.geofictician.net), which currently has 4 posts, created over the weekend. Note that it seems like this blog will be fairly technical, representing the most abstruse aspects of my bizarre and embarrassing hobby, which might be termed "computational geofiction."

[daily log: walking, 7km]

Caveat: Logofication

GF Logo

I designed a "logo" for my new website, this morning. The drawing is not really original – it's a free-hand consolidation of several images found online. For all that, I'm moderately pleased with the result, as a first draft.

I'm least happy with the vertical lettering – but the constraints of the drawn image, combined with the fact that the logo needs to be square, made this the most reasonable approach, I thought. I'll work on it more, at some point.

[daily log: walking, 7km]

Caveat: more hackings

Lately, I feel like I've been "on a roll" with respect to technical undertakings. And meanwhile, my creative efforts have been falling painfully flat. So I've shifted my efforts in my free time from creative work (writing, my geofictions, etc.) to computer tinkering. This is the sort of behavior that led to an entire somewhat-successful but ultimately stressful career in the 2000s.

I decided, in light of this, to go ahead and invest a few dollars a month in a hosted Ubuntu Linux server. In fact, I already have another hosted server, but it's on Windows, which I'm less skilled at hacking, and mostly I use it as an FTP spot for backing up my data and as an image server for my blog.

Having an Ubuntu server allows me to deploy actual websites and apps rather than just have them running on my desktop. And, frankly, a hosted server is a lot more reliable than my desktop – assuming I don't blow it up through some hackerish ineptitude. 

So I clicked "Buy" early this morning and I made a server. 

I got one of my long-neglected domain names pointed to it: geofictician.net. I hope to get others of my domains pointed there too, over time. I have a half-dozen domains that I barely use: jaredway.com, raggedsign.net (my old business), caveatdumptruck.com (which points to this blog), etc.

So far, all that's installed is a skeleton of a mediawiki instance (i.e. an "empty" wikipedia, basically). That's because those things are really easy to install and give a very professional-looking result "out of the box." I'll see what else I can get installed, later.

Here's a link:

https://geofictician.net/wiki

See? It's really there.

[daily log: walking, 7km]

Caveat: No getting off this machine

I decided to spend the day making a backup file of this here blog thingy™. That’s 14 MB of just text – not even counting the pictures, which should be backed up separately (and which I still need to work on, because there’s no quick extract of those files, except in the case of the newer ones where I proactively adopted a more back-up-able storage method).

That’s what I get for 13 years of blogging.

Then, just for the hell of it, since I happen to have a webserver (apache) and appropriate database (MySQL) running on my desktop at the moment, I hacked together a wordpress instance and “published” a clone of my blog on my desktop, just to prove I still have a few technical skills. It took me about 3 hours to get a passable instance (sans local pictures) up and running. Anyway, I can say that as long as I keep my backups up-to-date, my blog is fully recoverable even if my bloghosting company meets a bad end for whatever reason. Here’s a screenshot of the CAVEATDVMPTRVCK doppelgänger I slapped together:
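In case it’s useful, the gist of keeping a wordpress clone like this backed up is just two artifacts – the database dump and the wp-content directory – so a sketch like the following captures it. The user, database name, and paths are placeholders standing in for my real ones.

```shell
# Sketch: snapshot a wordpress instance's two moving parts.
# 1. the MySQL database (posts, comments, settings)...
mysqldump --user=wpuser --password wpdatabase > blog-backup.sql
# 2. ...and the wp-content directory (themes, plugins, uploaded media)
tar -czf blog-content-backup.tar.gz /var/www/html/wp-content/
```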

picture


What I’m listening to right now.

Younger Brother, “Train.”

Lyrics.

The world flashing past
So many others moving so fast
I feel my heart slow
As we go under I don’t bow down
Nothing left undecided
Upon steel on steel
Through these cuts in me I’ve
No question who was here first
Many dreams many lifetimes
Any of which could be me
Accept that I’m the one unable to move upon this machine
Upon this machine

On a caravan in motion
Alone in crowds just like me
I fell between the moments
I fell between the ideas
Cuz when it runs around the windows
Nothing here is still
All the patterns colliding
Through these villages and hills
So many dreams so many lifetimes
Any of which could be me
Accept that I’m the one unable to move upon this machine

There’s no one else, no one else, no one else, no one else
There’s no one else, no one else, no one else, no one else

There’s no one else, no one else, no one else, no one can help me now
Help me I’m stuck in this moving thing
Nothing is what it seems
No getting off this machine

picture[daily log: walking, a little]

Caveat: At Cave…

Today is one of those typical middle-of-the-week one day holidays: Korean independence day (which is to say, on March 1, 1919, Koreans declared independence – but they didn't achieve it until 1945 when the Americans nuked Japan).

I celebrate my own potential for independence by sitting and lurking in my little 9th floor cave, spending far too much time "hacking" on my computer.

I guess I see the value in doing this, in that I keep up an alternative set of skills, should this current "career" as a teacher ever become unsustainable for whatever reason.

So… I have been installing some development and deployment tools. I have both PostgreSQL and MySQL databases running, I have the mediawiki instance running, and now I have the nice Ruby-based open-source GIS web package up and running: the OpenStreetMap architecture, with both presentation (a website with a google-style "slippy map", but just on my desktop so far) and back-end (map tile generation – very rough, but working). I haven't yet figured out how to customize or personalize it in any way – I had literally NEVER done anything with Ruby (a website development language) before today. But… well, a website is a website, right? How hard can it be to figure out?

I think the next step is to install some kind of IDE. So far, I've just been doing everything with the Linux terminal and gedit (a text editor like Windows' Notepad). I have never been fond of IDEs, but I doubt it's possible to work with anything of this level of complexity in the old "hacker" style. I did use IDEs for my SQL dev work in the 2000s (mostly Visual Studio).

[daily log: walking, from directory to directory on my too-small hard drive]

Caveat: en es ko

Well, it took me more than six months to get around to it, but over this past weekend I finally resurrected my Linux desktop. I had managed to break it while trying to expand the size of the Linux OS partition on my hard drive, and had been too lazy to go in and rescue all the old files and resurrect it. Instead, all this time I have been unhappily limping along with the Windows 7 “Korea” edition that was native to my home desktop PC. I guess from a day-to-day “surf the internet” functionality standpoint, it was fine, but lately I’ve been wanting to get back to doing something more productive with some programming (er… really just hacking around with things) in support of my moribund geofiction hobby. As such, having a functional Ubuntu Linux desktop is pretty much indispensable.
In fact, once I’d backed up all my files to an external drive, which was merely tedious, the re-install was mostly painless. As before, the most painful thing for me with Linux is language and keyboard support issues. I cannot function, now, without having Korean and Spanish language keyboard options – I still do some writing in Spanish, of course, and although my Korean remains lousy in qualitative terms, it’s nevertheless a ubiquitous aspect of my daily existence, and being able to type it comfortably is essential.
Each time I try to get the Korean keyboard and language options to work on a Linux install, it goes differently. It feels like a kind of hit-or-miss affair, where I keep trying various gadgets and settings in all possible combinations until I get one that works. This inevitable confusion was not helped by the fact that, unlike last time, when I used Ubuntu’s native “Unity” desktop, I opted this time to try the so-called Cinnamon desktop (part of the “Mint” distro, a fork of Ubuntu). This was because I’d heard that Unity was not much longer for this world, and that Canonical (the creators of Ubuntu) intended to get out of the desktop-making biz.
Linux (at least these Ubuntu distros) makes a distinction between a “language setting” (which is fundamentally useless for controlling how the system reads the keyboard, as far as I can tell) and an “input method” – which is what you need. But these two subsystems don’t seem to talk to each other very well.
The peculiar result I achieved after a few hours of dinking around, this time, is possibly unique in the entire world. I have my Ubuntu 16.04 with Cinnamon desktop, where the “system language” is English, the “regionalization” is Korean, the “keyboard” is Spanish, and the “input method” is Korean. This is pretty weird, because my physical keyboard is, of course, Korean. So for my regular day-to-day typing, the keys (except the letters proper) don’t match, since all the diacritics and symbols and such are arranged quite differently on a Spanish keyboard. But I’ve always been adept at touch typing, and I know the Spanish layout mostly by heart. Then when I want to type Korean, I hit the “hangul” key (which the “Spanish keyboard” can’t “see” since Spanish keyboards don’t have “hangul” keys) and that triggers the Korean part of the IBus input widget, and I can type Korean. It sounds bizarre, but it’s the most comfortable arrangement of keyboard settings I’ve ever managed, since there’s never any need to use a “super” shortcut of some kind to toggle between languages – they’re all running more-or-less on top of each other in a big jumble instead of being segregated out.
I hate to say it, but I didn’t take notes as to how I got here – so I can’t even tell you. I just kept trying different combinations of settings until one worked. I messed with the “Language Support”, the “IBus Preferences”, the “Keyboard” (under “System Settings”), and the System Tray.
Anyway, I took a screenshot of my system tray, where you can see the whole resultant mess in a single summary snapshot.
picture
I now have a full-fledged Mediawiki instance up-and-running on the desktop (you can visualize a sort of “empty” wikipedia – all the software, but no information added into it). I’ve even configured the OGF-customized “slippy map” embeds for it (I managed that once on this here blog, too). I’m currently trying to get a PostgreSQL database instance working – MySQL is running but PostgreSQL has better GIS support, which is something I’m interested in having.
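The GIS support I’m after comes via the PostGIS extension, which – assuming the postgis package itself is installed – gets enabled per-database with a one-liner (the database I’d run it in is up to you; the sanity-check query is standard PostGIS):

```sql
-- Enable PostGIS in the current database (run as a superuser)
CREATE EXTENSION postgis;
-- Quick sanity check: returns the installed PostGIS version string
SELECT PostGIS_version();
```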
So there, you see a sometime hobby of mine, in action once again after a sort of winter hibernation, I guess.
picture[daily log: walking, 7km]

Caveat: Don’t tell God your plans

I finally got motivated to repair my linux desktop that I broke about 6 months ago. So… I'm obsessively messing around with config files and various arcana of Ubuntu Linux.

What I'm listening to right now.

David Bowie, "No Control."

Lyrics

Stay away from the future
Back away from the light
It's all deranged – no control

Sit tight in your corner
Don't tell God your plans
It's all deranged
No control

If I could control tomorrow's haze
The darkened shore wouldn't bother me
If I can't control the web we weave
My life will be lost in the fallen leaves

Every single move's uncertain
Don't tell God your plans
It's all deranged
No control

I should live my life on bended knee
If I can't control my destiny
You've gotta have a scheme
You've gotta have a plan
In the world of today, for tomorrow's man

No control

Stay away from the future
Don't tell God your plans
It's all deranged
No control

Forbidden words, deafen me
In memory, no control
See how far a sinful man
Burns his tracks, his bloody robes
You've gotta have a scheme
You've gotta have a plan
In the world of today, for tomorrow's man

I should live my life on bended knee
If I can't control my destiny
No control I can't believe I've no control
It's all deranged

I can't believe I've no control
It's all deranged
Deranged
Deranged

[daily log: walking, 1km]

Caveat: Disassembly

picture

One thing I spent some time on over my little holiday was in trying to cannibalize my old notebook computer.

I had this idea I could take out the hard drive and install it into my desktop. The notebook computer is the one I bought immediately prior to coming to Korea, in 2007, so it is 10 years old. The screen died a few years ago so I quit using it, but I had it lying around with this idea to salvage the hard drive, and I finally attempted it.

I succeeded in extracting it, and plugged it into my desktop (using the CD/DVD drive connectors – they're all standard connections, and plug together like legos). But the drive was unreadable. I guess it decayed or froze up or something.

Anyway it was fun taking the old notebook apart. In the picture: the hard drive on the lower right, the CPU is the little square thing in the middle right. It was a good computer, and served me well. At one point, I had it running three operating systems (triple boot: Linux, Windows 97, Windows Server). 

[daily log: walking, 7km]

 

Caveat: Tárrases

I’m not exactly in the closet about my geofiction hobby – I’ve blogged about it once or twice before, and in fact I link to it in my blog’s sidebar, too – so alert blog-readers will have known it is something I do.
Nevertheless, I’ve always felt oddly reticent about broadcasting this hobby too actively. It’s a “strange” hobby in many people’s minds, and many aren’t sure what to make of it. Many who hear of it perceive it to be perhaps a bit childish, or at the least unserious. It’s not a “real” hobby, neither artistic, like writing or drawing, nor technical, like coding or building databases. Yet geofiction, as a hobby, involves some of all of those skills: writing, drawing, coding and database-building.
Shortly after my cancer surgery, I discovered the website called OpenGeofiction (“OGF”). It uses open source tools related to the OpenStreetmap project to allow users to pursue their geofiction hobby in a community of similar people, and “publish” their geofictions (both maps and encyclopedic compositions) online.
Early last year, I became one of the volunteer administrators for the website. In fact, much of what you see on the “wiki” side of the OGF website is my work (including the wiki’s main page, where the current “featured article” is also mine), or at the least, my collaboration with other “power users” at the site. I guess I enjoy this work, even though my online people skills are not always great. Certainly, I have appreciated the way that some of my skills related to my last career, in database design and business systems analysis, have proven useful in the context of a hobby. It means that if I ever need to return to that former career, I now have additional skills in the areas of GIS (geographic information systems) and wiki deployment.
Given how much time I’ve been spending on this hobby, lately, I have been feeling like my silence about it on my blog was becoming inappropriate, if my blog is truly meant to reflect “who I am.”
So here is a snapshot of what I’ve been working on. It’s a small island city-state, at high latitudes in the Southern Hemisphere, with both “real-world” hispanic and fully fictional cultural elements. Its name is Tárrases, on the OGF world map here.
Here is a “zoomable and slidable” map window, linked to the area I’ve been creating, made using the leaflet tool.


There were some interesting technical challenges to get this to display correctly on my blog, involving several hours of research and coding trial and error. If anyone is interested in how to get the javascript-based leaflet map extension to work on a webpage (with either real or imaginary map links), including blogs such as typepad that don’t support it with a native plugin, I’m happy to help.
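Stripped of the blog-specific workarounds, the core of it is just the standard leaflet boilerplate pointed at a custom tile URL – something like the sketch below, where the tile URL is a stand-in for my actual server and the coordinates are arbitrary:

```html
<!-- Load leaflet's CSS and JS from a CDN, then point a map div
     at a custom tileserver. URL and coordinates are placeholders. -->
<link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css" />
<script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
<div id="map" style="height: 400px;"></div>
<script>
  var map = L.map('map').setView([-58.6, 23.5], 11); // lat/lon placeholder
  L.tileLayer('https://tile.example.com/{z}/{x}/{y}.png', {
    maxZoom: 19,
    attribution: 'my own tileserver'
  }).addTo(map);
</script>
```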
I have made a topo layer, too. I am one of only 2-3 users on the OGF website to attempt this, but the result is quite pleasing.

I have always loved maps, and since childhood, I have sometimes spent time drawing maps of imaginary places. However, I never dreamed that I’d be producing professional-quality, internet-accessible maps of imaginary places. I believe it is a kind of artform.
So that’s where my time off sometimes disappears to.
UPDATE NOTE 1, 2016-12-05: The topo view is currently broken due to some work I’m doing. It will be repaired eventually.
UPDATE NOTE 2, 2017-02-16: The topo view has been repaired.
UPDATE NOTE 3, 2019-08-15: I noticed while doing other blog maintenance that the leaflet embeds were broken. I spent a few hours fixing them – apparently some recent leaflet.js update wasn’t backward-compatible (argh).
UPDATE NOTE 4, 2021-10-13: I noticed while doing other blog maintenance that the leaflet embeds were broken (again). I spent some time fixing them (again). Using a leaflet plugin for wordpress, now. Let’s see how long that works…. 
[daily log: walking, 1.5km]

Caveat: Coder’s Fugue

This past week has been strange, as I took on – somewhat voluntarily – the challenge of "rescuing" my computer rather than just replacing it. I have a more-or-less functional Linux desktop working on my computer, but I've struggled with a basket of deplorable system configuration experiences. I'm stubborn, and I have this weird, "flow" state-of-mind that I get into as I try to solve software problems, that I don't particularly enjoy. I suppose it's why I was fairly successful in the computer and IT world, as I was last decade. But the negative aspects of the state bother me – affectively a bit "dead" feeling, and a bit too obsessive with feeling I must solve a given problem. These are the reasons I quit that career, and only a few days of returning to it, even part-time and in the context of my own computer at home, serve to remind me of why I quit.

I started having "code dreams" too, again, after their having faded away over the last few years. These are dreams that essentially consist of little more than staring at a screen and trying to solve some puzzling computer behavior. They're not nightmares, but they're plotless and vaguely kafkaesque-feeling. I call them "coder's fugue."

I had a whole string of them, all night last night. So I have decided to give this computer thing a rest, today – I mean, I still can use my computer, but I'm just trying to accept the aspects of its current set-up that annoy me, and not trying to fix them, and just getting up and doing something else if I feel frustrated by it.

[daily log: walking, 1.5km]

Caveat: an obsessive tinkerer tinkers obsessively…

At the risk of becoming boring, posting on the same essentially autobiographical topic for the third day in a row…

I continue to obsessively mess around with my computer, trying to figure out what happened to it. There is a component of my personality that is a compulsive tinkerer, and thus I somehow prefer to try to fix a clearly dying computer to buying a new one. I suppose partly I see it as an opportunity to "prove myself" and make sure I possess at least some of the skills necessary to be "self-sufficient" in the context of computers.

I made a very weird thing happen: when I gave my computer a complete "cold" shutdown (i.e. I removed the onboard battery, which forces the BIOS to reset), my USB bus returned to life! This seems quite weird and miraculous, but I can just barely grasp how this might work: if something had happened that caused my BIOS configuration to break, which had in turn caused the lost USB bus, then by forcing the reset I recovered the original BIOS configuration.

Well, anyway, in theory this means my computer isn't actually broken, at the moment. But I have lost my trust in my computer – I'm working hard to make sure nothing would be lost if it should crash catastrophically. This is a useful exercise, which I don't resent.

I continue to tinker with Linux – it's interesting to me, at an almost obsessive level. I'm curious, now, to see if I can replicate ALL the functions I was performing on my home Windows machine – because my relationship with Windows was always a marriage not of love but of convenience. I had concluded 4 years ago that I could NOT replicate all those functions, but having solved the language issue yesterday, I feel optimistic that Ubuntu has progressed to the point where I maybe can do it.

There are some challenges:

  • getting my massive music collection (18000 tracks? – I didn't even know!) to be accessible and playable – every time I try to configure one of ubuntu's music players and point it to my music collection, it crashes;
  • configuring my offline mapping tool (JOSM) that I use for my geofiction hobby; this should be easy, since JOSM was originally written for Linux, but I'm running into problems;
  • replicating my "sandbox" database (postgresql) and coding environment (perl / python) – because I have a mostly dormant hobby of trying to keep my programming skills functional, in case this "teach English in Korea" gig falls apart, or if that worst-case-scenario related to my mouth health situation eventuates, and I experience a major impairment or loss of my ability to talk.

[daily log: walking, 7km]

Caveat: The Hangul Toggle

Mostly I don't post technical stuff on my blog, but I want to post this because it took a lot of googling and troubleshooting to solve the problem.

As mentioned in yesterday's post, for reasons having to do with a recent computer crash, I decided to give a try at using Linux again. I downloaded Ubuntu 16.04 and upgraded the never-used dual boot on my system, because my computer is currently USB-less, which has in turn left me mouseless. Linux offers more options for dealing with a mouseless computer, at least as a temporary stand-in until I can decide what kind of replacement or repair to do.

The reason I gave up on Linux before was because, as bad as I am at Korean, I still view having the ability to type in Korean on my home computer as an absolute necessity – a perusal of my blog will show why: I like to post my efforts at learning Korean, aphorisms, etc.

Ubuntu Linux (and the other distributions I flirted with) has (had?) documented issues with keyboard internationalization. I had decided it was beyond my limited skills to deal with them. I couldn't get the "hangul toggle" to work: that is the keyboard button on Korean keyboards that lets users switch between ASCII (Roman alphabet) typing and hangul (Korean alphabet) typing.

This time, under 16.04, I gave it another try. I did some googling to see whether someone had found and documented a solution.

I found this page. It's in Korean, but the relevant Linux commands are there, and I could piece together the steps required to get things to work.

I followed the steps, and after a reboot (which had some frustrating, unrelated hiccups caused by the weird way the Ubuntu-installed GRUB loader interacts with a Korean-speaking BIOS), the keyboard entry works!

After this, I also found this English-language discussion of a slightly different method, which anyone is free to try as well.

And now I can say, from this Linux window: 문제를 해결했습니다! ("I solved the problem!")

Ubuntu1604_hangultoggle

[daily log: walking, 7km]

Caveat: Crashed and not yet burning

I'm having kind of a horrible day.

My computer had a weird kind of crash, this morning. I think the hard drive is fine, but the ports bus seems partly burned out, so a bunch of the devices have become "invisible." Based on some rudimentary troubleshooting, I think it's a hardware problem rather than a software problem, since the problems are the same under Linux as they are under Windows.

I'm running under Linux at the moment, because it's easier to use Linux with a non-functioning mouse – which is one of the fatalities of this problem. I have no speakers, no mouse, no ability to plug in external storage – nothing USB works. I might be able to get a functioning mouse if I could find one of the old PS/2-style mice, but they're not sold in stores anymore – I have a quite old PS/2-style keyboard I'm using, but the spacebar is "floppy" and the backspace is erratic. I am FTPing my most important files to my rarely-used server, because I guess I need to buy a new computer, and lacking USB ports means I have no other simple means of extracting data from my hard drive.

I went to the store earlier intending to shell out and buy a new computer, but changed my mind at the last minute because I feel such a major investment needs to be better thought out beforehand. Anyway, buying a Korean-speaking, Microsoft Windows computer is a major undertaking. Korean Windows is really the only option here – English-language Windows is possible as a pirated version, but every time I've tried to do an "official" upgrade I've been driven away by impossible-to-understand websites mediating the process – not to mention the exorbitant price MS charges for the English version in Korea. Korean Windows will mean spending a day with a dictionary in one hand and the system set-up windows on the screen, as I walk through getting it all working the way I want. Maybe I'll give a try at going to Linux full-time, again – it's been a few years since I last tried that, which might be enough time for them to have sorted out the truly annoying language-support issues that drove me away from it before.

I'll sleep on it.

[daily log: walking, 3km]

Caveat: 6502

When I was in middle school, we had an Apple ][ computer. I messed around with programming – not very seriously, but I taught myself the rudiments of MOS 6502 assembly language ("machine language"). The 6502 is perhaps one of the most famous CPUs in the history of personal computing, since not only was it the CPU in the first Apple computers, it was also the heart of early Nintendo and Atari game systems. Even today, there are "6502 fan clubs" and "retro computing hobbyists" focused on the platform.

picture
Later, in college, I took a few computer science courses (enough to make a minor), including an assembly language programming class, at that point in time based on the Motorola 68000 platform (the CPU in the Macintoshes of the era). I'd had some chance to experience one of that chip's ancestors in middle school, too, because my uncle had brought home a project he was working on involving a Motorola 6800.

That assembly language class was undoubtedly the most valuable computer science class I took, because if you can make something work in assembly language, you have a genuine understanding of how computers actually work, and this provides an excellent foundation for any future programming efforts.  I have long believed that assembly programming should be a foundational course in high school curricula – for all students, at the same level as courses like math, physics or biology. 

During the time I was taking that course (1988?), I dug out an already-at-that-time antiquated Apple ][, and with some help from my friend Mark (who at that time was starting his career in embedded systems), I began writing a Lisp language interpreter for the Apple, using 6502 assembly directly, and using my battered, red-covered reference manual preserved from the middle school years (the picture at right was found using an online image search, but it looks exactly like my old copy).

I didn't really get much further than the rudiments of my interpreter – I think I got it to respond to some kind of "hello world" instruction, in the vein of

(DEFUN HELLO ()
  "HELLO WORLD")
(HELLO)

I was unable to solve the "garbage collection" problem (memory management), and I never took the Operating Systems Design course that could have given me the understanding necessary to do that. But it was fun. Recently, remembering this activity, and aware that your average internet browser tab has much more abstract computing power than a 6502 chip, I was curious if there were 6502 emulators available.

Sure enough, there are. Many, many emulators – google is your friend, if you're interested. 

picture
Not only are there emulators, but someone has taken the time to construct a fully-functioning "virtual 6502" which is visual – meaning you can watch the signals step through the fully mapped processor as it executes its instructions. You could run Space Invaders on the emulator (if you could find compiled code for it), and watch the processor step through the execution of the classic game.
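The heart of any such emulator, visual or not, is a fetch-decode-execute loop. Here is a hypothetical four-opcode sketch in Python – the opcode bytes are real 6502 ones, but this is a toy illustration, not my old interpreter project and nothing like a complete emulator:

```python
# Toy fetch-decode-execute loop in the spirit of a 6502 emulator.
# Only four real 6502 opcodes are implemented -- enough to add two
# numbers and store the result. Purely illustrative.

def run(memory, pc=0x0000):
    """Step through instructions in memory until BRK (0x00) is hit."""
    a = 0        # accumulator
    carry = 0    # carry flag
    while True:
        opcode = memory[pc]
        if opcode == 0xA9:    # LDA #imm -- load accumulator, immediate
            a = memory[pc + 1]
            pc += 2
        elif opcode == 0x69:  # ADC #imm -- add with carry, immediate
            total = a + memory[pc + 1] + carry
            carry = 1 if total > 0xFF else 0
            a = total & 0xFF
            pc += 2
        elif opcode == 0x8D:  # STA abs -- store accumulator, absolute
            addr = memory[pc + 1] | (memory[pc + 2] << 8)  # little-endian
            memory[addr] = a
            pc += 3
        elif opcode == 0x00:  # BRK -- just stop (a real 6502 fires an interrupt)
            return a
        else:
            raise ValueError(f"unimplemented opcode {opcode:#04x}")

# Assembled program: LDA #$02 ; ADC #$03 ; STA $0200 ; BRK
memory = bytearray(0x10000)
memory[0:8] = bytes([0xA9, 0x02, 0x69, 0x03, 0x8D, 0x00, 0x02, 0x00])
run(memory)
print(memory[0x0200])  # 5
```

Each iteration reads an opcode, dispatches on it, and advances the program counter. The visual "virtual 6502" does conceptually the same thing, except it models the actual transistor network rather than dispatching on opcode bytes.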

Given that I am a nerdly geek, this was interesting to me.

[daily log: walking, 7km]

 

Caveat: <html>37 years of markup</html>

I had this weird realization, over the weekend, as I did some little thing on my computer: I have been hacking around with HTML for more than 20 years now. I was first exposed to HTML in maybe 1994, when I was taking grad-level courses at the University of Minnesota in preparation for my formal application to grad school, and was messing around on the U of M's intranet, which was in its infancy but was well ahead of the technology adoption curve, since the WWW was only about 3 years old at that point (note that the U of M was one of the innovators in this realm, having been the original home of "gopher," a hyperlinked, markup-driven proto-web that was one of the conceptual predecessors to Tim Berners-Lee's creation of the WWW at CERN in 1991).

Less than 2 years later, as a grad student at the University of Pennsylvania, I "published" my first web page – several webpages, actually: a simple website that provided me with a means to communicate homework assignments and ideas to my students (I was a TA, teaching lower-level Spanish language classes). My website included a little compilation of interesting bits of Spanish-language culture such as could be found online in that early period of the internet. When I was no longer teaching, I moved the site over to GeoCities, where it lasted a year or two more, but it eventually died (along with GeoCities, of course).

picture
HTML (hyper-text markup language) was not that hard for me to pick up. I was already familiar with the concept of "markup," since even in 1995 I had already been dealing with some other types of markup for almost two decades.

I was exposed to the concept of "markup" in middle school in the late 1970s, thanks to my computer-literate uncle, who had an Apple II that he'd kludged together with an IBM Selectric typewriter (well, not brand-name – I think it was a Japanese clone of an IBM Selectric). This unholy marriage allowed him to produce letter-quality printer output in what was still a predominantly low-resolution, dot-matrix age (picture at right, for those too young to remember). I wrote my middle-school English essays (and later high school essays) using this arrangement. Sending unformatted text files to this printer required the use of a fairly arcane set of markup commands (possibly these commands were ancestral to what later became LaTeX? I'm not sure).

Later, as an undergraduate in 1983-1985, I had a work-study job in the department of Mathematics, where they discovered my mastery of the principles of markup and made use of me for some departmentally published mathematics textbooks. Even today, mathematical printing requires a great deal of markup to come out looking good – just look at the "source" view, sometime, on a math-intensive Wikipedia page.

So, as I said, markup was already an "old" concept to me when I met HTML in grad school. And HTML is a conceptually quite simple implementation of markup principles.
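Simple enough, in fact, to demonstrate in a few lines of Python. The sketch below translates a made-up dot-command markup (hypothetical – loosely in the spirit of those old printer commands, not the actual ones) into HTML:

```python
# Hypothetical mini-markup translator: maps an invented dot-command
# markup onto HTML tags. Illustrates the principle, not any real system.

COMMANDS = {".b": "strong", ".i": "em", ".h": "h1"}

def to_html(lines):
    """Turn lines like '.b some text' into '<strong>some text</strong>'."""
    out = []
    for line in lines:
        cmd, _, rest = line.partition(" ")
        tag = COMMANDS.get(cmd)
        if tag:
            out.append(f"<{tag}>{rest}</{tag}>")
        else:
            out.append(f"<p>{line}</p>")  # anything unmarked is a paragraph
    return "\n".join(out)

print(to_html([".h A heading", ".b bold words", "a plain paragraph"]))
# <h1>A heading</h1>
# <strong>bold words</strong>
# <p>a plain paragraph</p>
```

That is the whole idea of markup in miniature: inline instructions, embedded in plain text, that some downstream program interprets as formatting.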

20 years later, I've realized that despite all my shifts in profession and location and lifestyle, probably not a week has gone by when I haven't hacked a bit of HTML. Of course, having this blog exposes me to opportunities – but most people with blogs avoid the markup, sticking to the user-friendly tools provided by blog hosts. I, however, somehow manage to decide to do some HTML tweak or another with nearly every blog post. Ever since I started keeping a separate work-blog to communicate with students, I have made even greater use of my HTML hacking skills, since they allow me a convenient way to bypass the Korean-language user interface on the naver.com blog-publishing website.

So … enjoy the fruits of markup – happy web surfing.

[daily log: walking, 6km]

Caveat: Disabling Windows Update Improved My Quality of Life Substantially

I have to say this – the “Windows Update” service sucks.
Perhaps there is some technical way to control when it decides to run its various processes. I was never able to figure it out – perhaps I was somewhat hindered by the fact that the computers I was working with are Korean language versions of Windows (and it should be noted, Microsoft charges money to change the language of your operating system, which strikes me as a cruel scam), and my Korean just isn’t that good.
picture
The experience I have had with Windows 7 is that the Windows Update kicks off various memory-intensive processes basically whenever it wants. These processes, linked to what is called the “TrustedInstaller,” are not the same as the timing of the download of the updates, which can be scheduled (see screenshot, above).
These processes use so much memory (the amount of which I couldn’t seem to regulate, cap or control) that they essentially prevent one from using the computer while they are running. As I said, these processes kick off on what seems an essentially random schedule (some experimentation showed that it was often, but not always, within the first few hours of connecting the machine to the internet after it had been turned off or disconnected). It’s a common enough problem that you can find other people complaining about it online with google searches, but most commenters seem to relegate it to the category of an annoyance, rather than considering it a major problem.
For me, it was a major problem. I don’t use my computer all the time at work – obviously, when I am in the classroom, I’m not using it. But when I need to do something on it, I need to do something right then. I can’t sit around and wait for some Windows Update trustedinstaller.exe doohickey to finish monopolizing the memory. I’m on a 4-minute break between classes, and I have run to the staff room because I need to use my computer to print something for a student, or to search for a file, etc. I would keep the “processes running” window open on my computer, just so at least I could decide right away whether my computer was going to be useful to me or not.
Lowering the priority on the process thread connected to the svchost.exe process that encloses the TrustedInstaller services involved didn’t prevent them from making my computer unusable. Killing off any of the svchost processes isn’t an option, as they tend to bundle things you really need, like network connectivity, with things you really don’t need, like Windows Update. The only solution is to disable the service.
So, finally, I simply disabled Windows Update. What is it doing, anyway? All these alleged virus vulnerabilities… sometimes I feel like it’s just so much technohypochondria, really. If I have a major problem on my computer, I have it all backed up – I learned my lesson about that long ago. I would never lose more than a day or two’s worth of work. I put everything important and long-term in Google Docs. Maybe I would be best off at this point with a Chromebook. But I need Windows because this is Korea – Microsoft owns the Korean OS space, and so I get too much stuff from coworkers, bosses, and students that is Windows-reliant, Internet Explorer (i.e. ActiveX) -reliant, and/or MS Office-centered. I actually have Ubuntu Linux installed on my work computer, and sometimes I open it just for a breath of fresh air, but the interoperability issues quickly end those experiments.
Anyway, disabling Windows Update prevents these memory-intensive processes, and I can use my computer when I need to, without these annoyances.
Sometimes, I leave my computer on and turn the Windows Update back on when I leave work, so it can still do its update thing if it wants.
Since I disabled Windows Update, I’ve experienced a noticeable improvement in my mood at work. I no longer dread having to run to my computer to do something for a student between classes. I no longer have to tell a student, on a nearly daily basis, “I’m sorry, I’ll get that paper to you later, I can’t open it on my computer right now.”
For anyone reading this: 1) if you’re having the same problem, just disable Windows Update – you’ll survive fine; 2) if you have some advice for how to get my (Korean-speaking) Windows 7 to run these memory-intensive processes only on some fixed schedule, please let me know – I gave up trying to figure it out.
Posting this here is probably inviting some various denominations of fanboy trolling, but I guess I’ll just deal with that – comments are moderated and often ignored.
picture
[daily log: walking, 6 km]

Caveat: jaredway.com

I have a fairly elaborate “professional” website, now, dedicated to my work as a teacher.

jaredway.com

I have now made it a “public” blog on the Naver (Korean web portal) platform, which increases its accessibility for students and their parents, since it is within the cultural firewall that surrounds the Korean internet.

[UPDATE: This is all quite out-of-date. The website, jaredway.com, is still active but much transformed – since around 2018 it’s been my personal “identity” site: stuff like my resume, a summary of interests, etc. But the “blog” I created for my work-related postings, in Korea, on the Korean platform, is still there! That is: https://blog.naver.com/jaredway]

CaveatDumpTruck Logo
