2006-11-15

You must be kidding me.. Part 1

from the manpage of diskseekd(1):
Several people have noticed that Linux has a bad tendency of killing floppy drives. These failures remained completely mysterious, until somebody noticed that they were due to huge layers of dust accumulating in the floppy drives. This cannot happen under Messy Dos, because this excuse for an operating system is so unstable that it crashes roughly every 20 minutes (actually less if you are running Windows). When rebooting, the BIOS seeks the drive, and by doing this, it shakes the dust out of the drive mechanism. diskseekd simulates this effect by seeking the drive periodically. If it is called as diskseek, the drive is seeked only once.

2006-11-02

Shooting Super-16mm On A Low Budget

Excellent intro to shooting on 16mm. Covers minimum costs for a short production.

2006-11-01

The linux-gate.so & VDSO story

If you ever wondered why linux-gate.so keeps appearing in your ldd output

$ ldd /bin/ls 
linux-gate.so.1 => (0xffffe000)
librt.so.1 => /lib/tls/librt.so.1 (0xb7f39000)
libacl.so.1 => /lib/libacl.so.1 (0xb7f30000)
libselinux.so.1 => /lib/libselinux.so.1 (0xb7f1b000)
libc.so.6 => /lib/tls/libc.so.6 (0xb7de3000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0xb7dd1000)
/lib/ld-linux.so.2 (0xb7f55000)
libattr.so.1 => /lib/libattr.so.1 (0xb7dcc000)
libdl.so.2 => /lib/tls/libdl.so.2 (0xb7dc8000)
libsepol.so.1 => /lib/libsepol.so.1 (0xb7d87000)

then this blog post is for you. It also says a few things about Virtual Dynamic Shared Objects (VDSOs) and the sysenter instruction found on modern x86 CPUs.
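
If you want to poke at it yourself, the vDSO also shows up as a mapping in /proc/<pid>/maps (a quick sketch; the exact address and label depend on your kernel):

$ grep vdso /proc/self/maps
ffffe000-fffff000 r-xp 00000000 00:00 0          [vdso]

The address matches the one ldd prints for linux-gate.so.1 above.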

2006-10-31

userspace filesystems

Once a year I do a "filemanager" poll: I check on the status of two-pane file managers, looking for something that resembles Total Commander, only on Unix.

This year I ran into gnome-commander while looking for a file-manager that would support libgnomevfs (with its exotic sshfs, smbfs and so on).

It strikes me as odd that there is so much code duplication between user-space filesystem projects. There are three levels of userspace filesystem implementations in Linux software right now:
  • Virtual filesystems implemented within the file-manager (such as a virtual filesystem for tar.gz contents)
  • Userspace filesystems exported to applications via libraries (and object broker services), such as the gnomevfs sshfs module or the samba kio-slave
  • Kernel-assisted userspace filesystems, accessible to all applications but requiring manual setup (see FUSE-related projects and the sketch below)
Is there a way to unite these projects so that people don't have to write the same code over and over again? Shouldn't a newly mounted userspace filesystem be available to all applications, and not just the one that generated it?
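
For the FUSE case, the "manual setup" mentioned above typically amounts to something like this (a sketch using sshfs; the host, paths and any required group membership are made-up assumptions):

sshfs user@somehost:/remote/dir /mnt/remote    # mount a remote directory over ssh, via FUSE
ls /mnt/remote                                 # any application can now browse it
fusermount -u /mnt/remote                      # unmount when done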

More on this later.

2006-10-29

Eternity and a Day

[a still frame from the film]
This is Theo Angelopoulos' view on the crossing to the Other side. The frame says it all.

2006-10-24

for Badi

I love Badi Assad's music.

It is sincere, sweet and meticulously played.

Badi herself is a wonderful person. I had the chance to enjoy one of her shows here in Athens last year. I hope she visits us again soon..

This is her home on the web.

Her album Verde has a version of Bjork's Bachelorette that still haunts me..

"if you forget my name..you will go astray..."

2006-10-23

Rebuilding freetype on Debian

My rather slow desktop (450 MHz) is becoming more and more sluggish.

A friend insisted that I should try compiling [1] libfreetype for my machine. Debian uses "-g -O2" for the official libfreetype6 package, which I changed to "-O3 -march=k6-2" in debian/rules.

apt-get build-dep libfreetype6
apt-get source libfreetype6
cd freetype-2.2.1
vi debian/rules
dpkg-buildpackage -b -nc -rfakeroot
dpkg -i ../libfreetype*.deb
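
The edit in debian/rules can also be scripted along these lines (a sketch, assuming the default "-g -O2" flags appear literally in that file; check with an editor first):

sed -i 's/-g -O2/-O3 -march=k6-2/' debian/rules   # swap the optimisation flags before building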


The optimisation flags helped a lot. Firefox seems a bit more lightweight now, gimp does font-kerning in 'no time' and gnome-commander displays directory entries much much faster.

I should note here that, according to the info pages of gcc [2], -mmmx and -m3dnow do not add any optimisations on their own. They simply make the corresponding CPU operations available in the form of builtin functions, such as:

v8qi __builtin_ia32_paddb (v8qi, v8qi)


kudos to vvas
[1] apt-build automates package building/upgrading on Debian
[2] pinfo is a nice browser for info pages btw..

2006-10-15

Catharsis (flushgmail)

When I got my gmail account I read the ad saying "this is a fantastic spot to subscribe to all your favourite mailing lists and never delete any email EVER". So I said, "ok, let's do that". I started subscribing to a few mailing lists (with lots and lots of traffic) and what do you know? 22k email messages in the first month. So I said "my bad, let's delete all these messages and I'll just read my news from tin as usual". But I found no way of doing this.

Months and years passed and not a single time did I use this account. Why? Because it was clogged with all these garbage messages. "Time to change this", Yoda said. So I looked at fetchmail's manpage to see if I could get rid of all messages in a POP3 mailbox (yes, gmail now gives you POP3-SSL access). Guess what? You can't do this without retrieving part of the messages. I resorted to something like this:

fetchmail -v -a --ssl -p POP3 -u account@gmail.com pop.gmail.com --bsmtp /dev/null


"Oh, such inefficiency!" cried the young padawan and ran towards the masters.

I say, since we're still on Catharsis weekend, let's do a proper flush for gmail :)


flushgmail.pl
---------------------------------------------------------------
#!/usr/bin/perl -w

use strict;
use Term::ReadPassword;
use Mail::Transport::POP3;

my $gmail_server = "pop.gmail.com:995";
my $stunnel_pid_file = "/tmp/flushgmail_stunnel4.pid";

die "usage: flushgmail.pl <localport>\n" unless (@ARGV == 1);

my $localport = shift @ARGV;

print "Enter gmail username: ";
my $username = <STDIN>;
die "error: no username provided!\n" unless defined $username;
chomp($username);

my $password = read_password(' gmail password: ');
die "error: no password provided!\n" unless defined $password;

sub report {
    my $msg = shift;
    print "flushgmail: $msg\n";
}

# terminate any stunnel instance we started (its pid lives in $stunnel_pid_file)
sub stunnel_shutdown {
    my $msg = shift;
    report($msg);
    if (-r $stunnel_pid_file) {
        my $pid = `cat $stunnel_pid_file`;
        chomp($pid);
        kill('TERM', $pid) if $pid;
    }
}

sub sig_int_catcher {
    stunnel_shutdown("stunnel emergency shutdown");
    die "flushgmail: caught SIGINT!\n";
}

$SIG{INT} = \&sig_int_catcher;

stunnel_shutdown("destroying old stunnel instances");

# spawn stunnel4 and feed it its configuration on stdin (-fd 0)
report("spawning new stunnel");
open(STUNNEL, "|stunnel4 -fd 0") or die "error: could not spawn stunnel4\n";
print STUNNEL << "STUNNEL_END_CONFIG";
pid = $stunnel_pid_file
[gmail]
accept = $localport
client = yes
connect = $gmail_server
STUNNEL_END_CONFIG

close(STUNNEL);

# gmail serves the mailbox in chunks, so keep reconnecting until it's empty
my $total_deleted_msgs = 0;
my $run = 1;
while (1) {
    my $pop3_conn = Mail::Transport::POP3->new(
        port     => $localport,
        hostname => 'localhost',
        username => $username,
        password => $password);

    unless (defined($pop3_conn)) {
        stunnel_shutdown("emergency stunnel shutdown");
        die "error: could not connect to stunnel\n";
    }

    my $msg_cnt = $pop3_conn->messages;
    last if ($msg_cnt == 0);
    report("run #$run found $msg_cnt messages");

    # flag every message for deletion; the server expunges them on disconnect
    my @ids = $pop3_conn->ids;
    $pop3_conn->deleted(1, @ids);
    $pop3_conn->disconnect;
    $total_deleted_msgs += $msg_cnt;
    ++$run;
}

stunnel_shutdown("shutting down stunnel");
report("deleted $total_deleted_msgs messages");


To use the script you'll need the Perl modules Mail::Box and Term::ReadPassword, as well as stunnel4 (because Mail::Box doesn't support SSL natively yet).
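
To run it, something like this should do (the port number 1100 is arbitrary, any free local port works):

cpan Mail::Box Term::ReadPassword    # or install your distro's packages, if available
perl flushgmail.pl 1100              # the local port stunnel will listen on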

If you're wondering why there's a loop around the POP3 connection.. I noticed that each time I connected to gmail, the POP3 server provided me with just a subset of my mailbox (ranging from 500 to 1200 messages) instead of the whole thing. Maybe this is the way gmail organises their mboxes, in volumes of a certain size..

Catharsis

I'm going through some kind of catharsis weekend.

On Friday night, I watched "Empire of the Senses", a film about sexual obsession, at a film festival in Athens. This was followed by a Pilali/Poulikakos show (a blend of Greek rock and rempetiko-blues) at RODEO club. The place had poor air conditioning, but the show was excellent. You don't get to see people dancing on tables at rock concerts, and this was a first ("Τη μπετονιέρα μη τη κατηγοράς, αυτή σου δίνει για να φάς", roughly "don't blame the cement mixer, she's the one who feeds you"). We took a cab to Psyrri square, to find our friends at Aspro. The groove penetrates your head, cleansing the day's worries.

Woke up late on Saturday. Had to review the case of a defaced site, where the owner sued a big corp. Had a coffee with a colleague and went through the facts of the case. Admins are poorly paid in this country and are usually pressed to wear more than one hat (computer support technician, technical advisor, security officer etc.). Companies don't take administration seriously, and security is something of an added-value service (usually provided by 3rd parties). I don't want to sound mean, but at some point there should be some sort of regulation for poorly maintained internet hosts.

Saturday evening found me at Theseum Theatre. Watched Superlux with a friend. Fantastic show. From what I hear, they'll be touring Europe soon :)

There was something bugging me all afternoon. I had to pop the Question.

[Mind you, the universe has no problem with you coming up with Questions, just as long as you don't require an Answer. If you _are_ looking for one, it will conspire in all its quantum glory and provide you with the shortest one.]

The Answer was "I don't know".

A glass of Drambuie, a couple of friends, Tapas coldcuts and a manic taxi driver made sure there were no more Questions before bedtime.

Today (Sunday) is election day. Met an old friend from school at the voting center (he still reminds me of Cartman :). Came back and continued working on my gmail flusher. Hmmm, haven't told you anything about it, have I? Check the next blog entry..

I'm off to pick up a friend from Larissis Station. The weekend is not over yet.

2006-06-28

BLOG WARS


BLOG WARS

It is a time of civil war.
The Rebel fleet leaves planet LiveJournal and
seeks refuge on the Blogspot system.

Seven Articles have been left behind.

The future is uncertain.

Will there exist another post?

2006-06-17

Privacy

(recovered from livejournal)

I'm becoming more and more concerned about privacy issues on the web, especially the so-called "user-tracking" problem. Let's say that provider A controls the web sites A1, A2 and A3. User Stella is browsing the web, hopping thru websites Z -> A1 -> D -> E -> A2 -> A3. Provider A can track a good proportion of Stella's web-browsing activity by examining the logfiles of the webservers hosting the sites A1, A2 and A3 (plus any info coming from referring sites or followed links). The logfiles contain (among other things) browser identification info (OS, version etc.), the actual page requested (along with any URL-submitted parameters), and the IP address of the public host that appears to have requested the page in question.
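
For example, a single line in a typical webserver access log (Apache "combined" format here, with made-up values) already carries the client IP, the requested page, the referring site and the browser identification:

10.0.0.42 - - [17/Jun/2006:21:13:05 +0300] "GET /article?id=7 HTTP/1.1" 200 5123 "http://site-Z.example/some-page" "Mozilla/5.0 (X11; U; Linux i686) Gecko/20060601 Firefox/1.5"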

I'm not really interested in what the provider does with this data (user-modelling, data mining etc.). What I'm concerned about is the possibility that a single entity could control such a number of websites that it would be feasible for this entity to reconstruct part of a (random) user's web-surfing route.

Of course, the number of sites that a provider would need to control seems daunting at first glance. And if you consider that the sites must cover varying subjects, so as not to track only the people clustered around a specific domain of interest, the task looks even more daunting.

But, let's imagine the following scenario:

A company offers the following services:
a) Search Engine
b) Ads (or other site-embeddable objects like website usage statistics, maps etc.)
c) Blog-space
d) Webmail
e) Social Networking
....
If you're beginning to see the pattern here, then maybe it's time to pass some legislation regarding the effective web presence (or omnipresence, for that matter) of entities with common interests...

..or maybe it's high time we start looking for anonymizing proxies :P

Current Mood: worried
Current Music: Husnu Senlendirici - Husnu Klarnet

2006-04-26

Is internet TV here already?

(recovered from livejournal)

First, a message to our sponsors:
If you are a person working for (or in charge of) a Greek TV station, I'd like you to know that whatever you're doing is wrong and you should stop doing it. Thank you.

I really can't say that I'm not a TV-buff. I've spent hours and hours in front of the telly during the '80s and I miss all the thrills I used to get from cliffhangers on Matlock, Dr Who etc. The shows were good, the reporters seemed to know a bit about their job and you could actually feel the people struggling to get a quality show on the air.

I won't go into the status of current Greek (or international) TV. It's horrible. And I feel that they've actually lost even the passive viewers (what is an active viewer?) they used to have: no more getting back from work and unwinding on the telly.

This reminds me of the time I was a student abroad and didn't have a TV set. I didn't really miss the shows. I would usually watch a film, or an episode of my favorite cartoon series (all available on the web - remember UGO cartoons?) on my PC and it felt the same. In fact, no commercials, more pop corn and even more friends coming over :-)

Apart from a few shows like Heavy's "Behind the music that sucks" or Slashdot Radio (featuring geeks on microphones), there wasn't much continuity in internet-broadcast media. Most shows were of the ad-hoc type (a few friends having laughs in front of the camera, remember TheBroken?) and you never knew if there was going to be another episode next week.

But the times they are a-changin':
I recently got broadband internet at home (384kbps, don't laugh) and during a web search for geek-related media I stumbled upon a few shows that seem to be changing the scene. There is definitely some backing from companies who are interested in presenting their products, but also from ISPs and others who merely wish to advertise thru this new (?) medium. So, instead of geeks with a camera, running around trashing laptops in their backyard, you have studios, regular interviews, wmv broadcasting, podcasting, multiple downloadable formats (h264 anyone?), RSS feeds with information on the shows and much more..

Here are a few links to whet your appetite:

Too geek-y? Well, they were the first results on google.

Archive.org has an online archive of freely available videos, but.. it's really not the same. They do archive the videos but you don't get the sense of continuity you'd like when watching TV programs.

Now, all that needs to be done is to set up distributor companies, who scout for such shows, *support* them by some means (money, bandwidth) and create user profiles, so that users who are interested in one type of show can find out about other related shows. This is not the same as the podcast directory, since we're talking about organisation of the digital media in live as well as static formats (DRM is not my friend) and in "channels" according to the likes of the user.

I'd love to see the BBC taking a chance on something like this..

Current Mood: refreshed
Current Music: Bebo & Cigala - Lagrimas Negras