The Text WINdow manager, twin, is quite a nice piece of software if you don’t want to have to run X, though it can also run inside an X terminal.
I have found that it works extremely well with the ‘leggie’ fonts from Leggie, a legible, pretty bitmap font.
Here is an example of a twin session, captured using fbcat.
So this was a twin session, using the leggie18 font, on a netbook.
twin has many capabilities; this only shows you what it looks like. One cannot run framebuffer graphics inside twin windows (e.g. fbi, or dosbox, which uses SDL), but I have found that twin on one virtual console plus a ‘bare’ framebuffer on a second one makes a good combination for working without X.
I have noticed that some fonts give ugly outlines on the windows (rows of diamonds or non-characters). The leggie fonts on 32-bit Debian give neat lines around the windows. YMMV.
Some nice features include a built-in clock (see bottom-right corner), and the ability to type in a window while keeping it behind others. That’s why the screen capture command is invisible in the screen grab above. Alpine, links, lynx and other text-based network tools work fine, plus all your other console tools. It is very light on resources, too. top suggests it uses about 0.5% of my memory and CPU — and I am running an old netbook with 1GB RAM and an Atom N550 chip!
It could form the basis of a Linux distro to challenge TinyCore.
Note to self
- Booted Win 10 machine.
- Got usual login screen.
- Logged in as per usual.
- Got a black screen with a working, visible mouse pointer.
- Got the menu and chose Task Manager.
- Went to start up items and disabled most items, but enabled ‘Lenovo Utility’.
- Used Ctrl-Alt-Del to reboot.
- Got the screen back.
Of course, I’ve skipped all the messing around that I really did.
I don’t know why. The other non-admin account on the machine worked fine, so something funny happened in a setting somewhere. Anyway, there you go.
The whole point of ‘tilde’ accounts is to give you a little server space where you can monkey around in bash and maybe build a website the 1990s way: by editing HTML in nano, emacs or vim.
Mine is here: https://tilde.club/~mz721/, and this is how it looked as-provided by the admin:
I have set it up so that if I scp files into the right directory, they will be automatically added to the index.html.
Consider the following bash script. I log into my account and run it from ~/public_html using nohup so that it does not stop when I log out. The makelinks.sh script lives in the ~/public_html/pages directory and just creates a block of HTML with links to each of the HTML files it finds in there.
The script then creates a little file with the time and date updated information in it, then cats all the bits together, then sleeps for a day and runs again. My scripting is very crude, but seems to work.
while true
do
    cd pages
    ./makelinks.sh
    cd ..
    echo \<p\> \</p\> > date
    echo \<p\> \</p\> >> date
    echo Last updated: `date` >> date
    cat index.top.html pages/index.middle.html date index.tail.html > index.html
    rm date
    sleep 24h
done
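An alternative to the nohup-plus-sleep loop would be cron; a sketch, assuming tilde.club allows user crontabs and that the loop body above were saved as a script (the name rebuild-index.sh is hypothetical):

```
# Hypothetical crontab entry (add it with 'crontab -e'):
# rebuild the index daily at 04:00 instead of sleeping in a loop
0 4 * * * cd $HOME/public_html && ./rebuild-index.sh
```

The sleep loop has the advantage of needing no extra setup, which suits the crude-but-working approach here.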
Here is the text of makelinks.sh. It loops over all the .html files in the directory. The echo command prints out a line of the form of a basic HTML link. It grabs the second line of the html file for use as the link text (that is what the head/tail bit does) then creates the link.
rm index.middle.html
for f in *.html
do
    echo \<p\>\<a href=\"pages/$f\"\>$(head -2 $f | tail -1)\</a\>\</p\> >> index.middle.html
done
The output of makelinks.sh looks something like
$ cat pages/index.middle.html
<p><a href="pages/hermes10.html">The Hermes 10 electric typewriter</a></p>
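The link-text extraction can be tried in isolation; a sketch using a throwaway file (the path and title below are just examples):

```shell
# Make a demo page with the 3-line comment header used on this site
cat > /tmp/demo-page.html <<'EOF'
<!--
The Hermes 10 electric typewriter
-->
<html><body>Content</body></html>
EOF

# Same trick as makelinks.sh: the second line is the link text
head -2 /tmp/demo-page.html | tail -1
```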
So all I have to do is make sure I upload an HTML file (with its dependencies) that looks something like the one below. The top 3 lines are:
- open comment
- link text
- close comment.
After that, any legit HTML should do.
<!--
Link text - always here.
-->
<html><head>
<title>Title</title>
</head>
<body>
<center>
<h1>Main heading</h1>
</center>
Content
</body></html>
So I make up my new page locally, with the correct 3-line header, then scp it to the correct folder, and the script wakes up once a day and adds the page to the index.
Crude, but effective. Obviously, I can complexify what I do and add more features, but I’ll let that evolve with time.
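Before uploading, the 3-line header can be sanity-checked locally; a sketch (the file name and link text here are hypothetical):

```shell
# Make a hypothetical new page
cat > /tmp/mypage.html <<'EOF'
<!--
My new page
-->
<html><body>Hello</body></html>
EOF

# Lines 1 and 3 must be the bare comment delimiters,
# so that line 2 becomes the link text in the index
if [ "$(sed -n 1p /tmp/mypage.html)" = '<!--' ] && \
   [ "$(sed -n 3p /tmp/mypage.html)" = '-->' ]
then
    echo "Header OK: link text is $(sed -n 2p /tmp/mypage.html)"
fi
```

If that passes, the file is ready to scp into the pages directory.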
Here’s a random example for no good reason.
Here is the gnuplot script (surfaceplot.gp):
set iso 30
set samp 50
unset key
#set title "sin(r)"
set xlabel "x" font "Times:Italic,14"
set ylabel "y" font "Times:Italic,14"
set zlabel "z" font "Times:Italic,14"
set xrange [-4:4]
set yrange [-4:4]
set xtics offset -0.5,-0.5
set ztics 1
unset surf
set style line 1 lt 4 lw 0.5
set pm3d
set term post level1 color font "Times,12" fontscale 1.0
set output "plotfile.eps"
splot sin(sqrt(x**2+y**2))
Here are the commands run at the command line:
$ gnuplot surfaceplot.gp
$ epspdf plotfile.eps
$ xpdf plotfile.pdf
$ pdftoppm.exe -r 600 plotfile.pdf > plotfile.ppm
$ convert plotfile.ppm plotfile.png
$ display plotfile.png
$ rm plotfile.ppm
And this gives me an eps, a pdf and a png:
 89K plotfile.eps
 56K plotfile.pdf
990K plotfile.png
And here’s a simple script to plot sections through the surface:
$ cat cuts.gp
unset key
#set title "sin(r)"
set xlabel "x" font "Times:Italic,18"
set ylabel "z" font "Times:Italic,18"
set xrange [-4:4]
set yrange [-1:1]
set border lw 0.25
#set style line 1 lt 4 lw 0.5
set term post level1 color font "Times,12" fontscale 1.0
set output "plotfile-cut-y=0.eps"
plot sin(sqrt(x**2+0**2)) lc rgb 'black' lw 4
set output "plotfile-cut-y=1.eps"
plot sin(sqrt(x**2+1**2)) lc rgb 'black' lw 4
And here is plotfile-cut-y=1:
Recently I got a tilde.club account. The account comes with 2 directories — public_html and public_gopher. So I started messing around with gopher, using the gopher client and the gopherus client (and lynx works too).
One of the most useful gopher pages I found was Text News. It sucks down RSS feeds and presents them as nicely formatted plain text.
Very nice, but as an Australian I’d like to read some Australian news. At present I’m not going to make my own gopher page or anything like that, though it would be possible to use the gopher server on tilde to do that, and I might one day (and/or an HTML version).
Very kindly, the Text News page explains how to use the script that grabs HTML and RSS pages and formats them for plain text reading.
- Using gopher, visited Text News on gopher and then downloaded the text file that describes how it works; saved to …
$ mkdir installs/rsstotext
$ mv instructions.txt installs/rsstotext/
$ cd installs/rsstotext
$ cat instructions.txt
- Viewed the file and installed stuff:
$ sudo apt-get install python python-pip   #python2
$ sudo pip install html2text requests readability-lxml feedparser
$ git clone https://github.com/RaymiiOrg/to-text.py
- Went to find some feeds
$ links2 -g google.com
- Searched for a list of Aussie RSS news feeds; found one at:
Downloaded as ausfeeds.html
- Viewed the file and worked out how to most easily pull out the URLs of the feeds; I don’t mind if it is a little bit manual.
$ grep xml ausfeeds.html | cut -d'=' -f2 > ausfeeds
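A sketch of what the grep | cut pipeline does to one line of the downloaded list (the URL here is made up; real files will vary):

```shell
# One hypothetical line from ausfeeds.html
line='<a href="http://example.com/feed.xml">Example News</a>'

# Keep lines mentioning xml, split on '=', take the second field
echo "$line" | grep xml | cut -d'=' -f2
```

The leftover quote and closing tag in the output are why a quick clean-up in vim follows.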
- Checked usage:
$ cat instructions.txt
- Edited the resulting file, including prepending the appropriate totext.py command to each line, and commenting out the ones I don’t want just now.
$ vim ausfeeds
$ cat ausfeeds
#! /bin/bash
echo "Consider cleaning up in /home/username/installs/rsstotext/saved!"
cd /home/username/installs/rsstotext
echo ABC ...
python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://www.abc.net.au/news/feed/2942460/rss.xml
#echo SMH ...
#python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://feeds.smh.com.au/rssheadlines/top.xml
echo Age ...
python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://feeds.theage.com.au/rssheadlines/top.xml
echo Huffington Post Australia ...
python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://www.huffingtonpost.com.au/rss/index.xml
echo Canberra Times ...
python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://www.canberratimes.com.au/rss.xml
#echo WA Today ...
#python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://feeds.watoday.com.au/rssheadlines/top.xml
#echo Brisbane Times ...
#python /home/username/installs/rsstotext/to-text.py/totext.py --rss -n --url http://feeds.brisbanetimes.com.au/rssheadlines/top.xml
echo Done! News stored in /home/username/installs/rsstotext/saved
sleep 2s
cd saved
vfu
- Made it executable (later made a soft link into ~/bin):
$ chmod +x ausfeeds
- Tried it…
It takes a while, but works fine. A text-mode file manager is a good way to view the results; here we use vfu, and this is what it looks like.
Once the directories are populated, it’s best to not rerun the script until an updated list of stories is desired — just use vfu to browse the existing downloads.
We’ve been having a connection problem with this router: a computer on a wireless connection will often find the router but not the internet, even though it worked on the last login. Any wired connections can see the interwebs. So I guess that’s a ‘wireless connection problem’.
Power cycling (have you tried turning it off and on again?) the router solves the problem, but is not a very satisfactory solution.
So I thought I’d try a firmware upgrade.
The old firmware was 1.9.1088 (or something), quite old. In a way that was good: more chance that the new firmware would solve the problem.
Went to https://support.netcommwireless.com/product/ntc-40wv and downloaded https://support.netcommwireless.com/sites/default/files/firmware/NetComm_M2M_Family_Release_FW220.127.116.11_NTC-40WV.zip. This is probably the last firmware this (now unsupported) device will receive.
Unpacked the archive.
Logged into router (http://192.168.1.1) as root.
Now, the main trick is that when upgrading from 1.x.x.x series firmware to 2.x.x.x, we must first install something called appweb-large-file-
This is within the downloaded archive, so it’s OK. Don’t go hunting around in the ‘net for it.
In the browser interface to the router, went to System → Load/Save → Settings and saved my existing router settings to a backup. (Accept the given file name!)
Then went to System → Load/Save → Upload and browsed for the ipk file, then uploaded and clicked ‘Install’. (The instructions are in a PDF that’s also in the archive; read that instead of this.)
The 2.x.x.x series firmware is a single big file, whereas earlier versions contained two smaller files, hence the need for the preliminary upload.
Then browsed again and chose ntc_40wv_18.104.22.168.cdi.
Uploaded and installed.
Waited about 4 or 5 minutes.
A shiny new status page and a rearranged browser interface to the router appeared. The router was active; I could browse the web etc., so all looked good. But had our problems with wireless connections gone away?
<<One week later>>
Now, after a week of observing the ability of machines to connect, do we still need to reboot the router regularly?
It seems … a lot better!