Planet freenode

November 21, 2014

RichiH's blog

Release Critical Bug report for Week 47

There's a BSP this weekend. If you're interested in remote participation, please join #debian-muc on

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1213 (Including 210 bugs affecting key packages)
    • Affecting Jessie: 342 (key packages: 152) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 260 (key packages: 119) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 37 bugs are tagged 'patch'. (key packages: 20) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 211 bugs are neither tagged patch, nor marked done. (key packages: 96) Help make a first step towards resolution!
      • Affecting Jessie only: 82 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 65 bugs are in packages that are unblocked by the release team. (key packages: 26)
        • 17 bugs are in packages that are not unblocked. (key packages: 7)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148)
49 256 (180+76) 360 (216+155)
50 204 (148+56) 339 (195+144)
51 178 (124+54) 323 (190+133)
52 115 (78+37) 289 (190+99)
1 93 (60+33) 287 (171+116)
2 82 (46+36) 271 (162+109)
3 25 (15+10) 249 (165+84)
4 14 (8+6) 244 (176+68)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at November 21, 2014 08:31 PM

November 14, 2014

RichiH's blog

Release Critical Bug report for Week 46

I know I promised better stats, but meh... Next week :(

As you can see, there's been a bit of a mass-filing going on, and that pushed us above Wheezy's count for week 46.

My own personal favourite bug is, of course, this one.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1263 (Including 218 bugs affecting key packages)
    • Affecting Jessie: 427 (key packages: 175) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 313 (key packages: 131) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 33 bugs are tagged 'patch'. (key packages: 15) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. (key packages: 6) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 268 bugs are neither tagged patch, nor marked done. (key packages: 110) Help make a first step towards resolution!
      • Affecting Jessie only: 114 (key packages: 44) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 82 bugs are in packages that are unblocked by the release team. (key packages: 32)
        • 32 bugs are in packages that are not unblocked. (key packages: 12)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Diff
43 284 (213+71) 468 (332+136) +184 (+119/+65)
44 261 (201+60) 408 (265+143) +147 (+64/+83)
45 261 (205+56) 425 (291+134) +164 (+86/+78)
46 271 (200+71) 401 (258+143) +130 (+58/+72)
47 283 (209+74) 366 (221+145) +83 (+12/+71)
48 256 (177+79) 378 (230+148) +122 (+53/+69)
49 256 (180+76) 360 (216+155) +104 (+36/+79)
50 204 (148+56) 339 (195+144) +135 (+47/+90)
51 178 (124+54) 323 (190+133) +145 (+66/+79)
52 115 (78+37) 289 (190+99) +174 (+112/+62)
1 93 (60+33) 287 (171+116) +194 (+111/+83)
2 82 (46+36) 271 (162+109) +189 (+116/+73)
3 25 (15+10) 249 (165+84) +224 (+150/+74)
4 14 (8+6) 244 (176+68) +230 (+168/+62)
5 2 (0+2) 224 (132+92) +222 (+132/+90)
6 release! 212 (129+83) +212 (+129/+83)
7 release+1 194 (128+66) +194 (+128/+66)
8 release+2 206 (144+62) +206 (+144/+62)
9 release+3 174 (105+69) +174 (+105/+69)
10 release+4 120 (72+48) +120 (+72/+48)
11 release+5 115 (74+41) +115 (+74/+41)
12 release+6 93 (47+46) +93 (+47/+46)
13 release+7 50 (24+26) +50 (+24/+26)
14 release+8 51 (32+19) +51 (+32/+19)
15 release+9 39 (32+7) +39 (+32/+7)
16 release+10 20 (12+8) +20 (+12/+8)
17 release+11 24 (19+5) +24 (+19/+5)
18 release+12 2 (2+0) +2 (+2/+0)
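The Diff column in the table above is simply Wheezy minus Squeeze, component by component: the total, then the two splits shown in parentheses. A minimal sketch of the computation (the function name is mine, purely illustrative):

```python
def diff_line(squeeze, wheezy):
    """Each argument is (total, first_split, second_split) for one week."""
    total, first, second = (w - s for s, w in zip(squeeze, wheezy))
    return "+%d (+%d/+%d)" % (total, first, second)

# Week 43: Squeeze had 284 (213+71) RC bugs, Wheezy had 468 (332+136):
print(diff_line((284, 213, 71), (468, 332, 136)))  # -> +184 (+119/+65)
```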

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at November 14, 2014 04:34 PM

November 11, 2014

RichiH's blog

One pot noodles

I had prepared a long and somewhat emotional blog post called "On unintended consequences" to write a rather sad bit of news off of my heart. While I believe the points raised were logical, courteous, and overall positive, I decided to do something different and replace sad things with happy things.

So anyway, for 3-4 people you will need:

  • The largest, widest cooking pot you can find (you want surface area to let more water evaporate)
  • 500g noodles, preferably Bavette
  • 300g cherry tomatoes
  • ~150g sundried tomatoes
  • ~150g grilled peppers
  • a handful of olives
  • two medium-sized red onions
  • as much garlic as is socially acceptable in your group
  • one or two handfuls of fresh basil leaves
  • large gulp of olive oil
  • ~100g fresh-ground Parmesan
  • salt, to taste
  • random source of capsaicin, to taste
  • water

Proceed to the cooky part of the evening:

  • Slice and cut all vegetables into sizes of your preference; personally, I like to stay on the chunky side, but do whatever you feel like.
  • Pour the olive oil into the pot; optionally add oil from your sundried tomatoes and/or grilled peppers in case those came in oil.
  • Put the pot onto high heat and toss the chopped vegetables in as soon as it starts heating up.
  • Stir for maybe a minute, then add a bit of water.
  • Toss in the noodles and add just enough water to cover everything.
  • Now is a good time to add salt and capsaicin, to taste.
  • Cook everything down on medium to high heat while stirring and scraping the bottom of the pot so nothing burns. You want to get as much water out of the mix as possible.
  • Towards the end, maybe a minute before the noodles are al dente, wash the basil leaves and rip them into small pieces.
  • Turn off the heat, add all basil and cheese, stir a few times, and serve.

If you don't have any of those ingredients on hand and/or want to add something else: Just do so. This is not an exact science and it will taste wonderful any way you make it.

by Richard 'RichiH' Hartmann at November 11, 2014 08:57 PM

freenode staffblog

Helping GNOME defend its trademark

The GNOME project will be familiar to the vast majority of our users; what you might not be aware of is that the project is currently facing an expensive trademark battle against Groupon, with the latter having allegedly chosen to infringe upon GNOME’s trademark by launching a product with the same name (a POS “operating system for merchants to run their entire operation”).

I am not going to go into the details here, as they have been explained by the GNOME project over at and the GNOME folk are in a much better position than me to provide more detailed information on the matter.

What I am going to do is appeal for your help. The GNOME project is looking to raise $80,000 to cover the legal costs involved in defending their trademark. At the time of writing this post, the freenode network has 89,998 connected users: users who are passionate about FOSS.

If each of us donated just ONE DOLLAR to the GNOME project they would cover the anticipated legal costs AND have some spare change leftover for a pint when the proceedings conclude.

Even if you do not use GNOME, please consider helping them out. This is bigger than just GNOME, and I think it would be fantastic if the FOSS communities could drum together to support our own.

If you head over to you can make a donation directly via PayPal by clicking on the “Help us by donating today” button.

Update: Due to the controversial nature of PayPal, GNOME is now also offering other ways to donate.

Thank you!

Update #2: According to the Groupon blog and this article over at Engadget Groupon has issued the following statement: “Groupon is a strong and consistent supporter of the open source community, and our developers are active contributors to a number of open source projects. We’ve been communicating with the Foundation for months to try to come to a mutually satisfactory resolution, including alternative branding options, and we’re happy to continue those conversations. Our relationship with the open source community is more important to us than a product name. And if we can’t come up with a mutually acceptable solution, we’ll be glad to look for another name.”

I am assuming that this means that the trademarks filed will be retracted and that the GNOME project can go about business as usual. I am certain they will be releasing a statement with further details before long.

by christel at November 11, 2014 06:57 PM

November 09, 2014

freenode staffblog

Atheme 7.2 and freenode


We’ve begun some testing on Atheme’s latest release, 7.2, and we’d like to invite interested users to help with that.

Not all changes the Atheme project has included in their new release will be included in our Atheme upgrade, so here’s the bulk of the changes that will actually affect our network:

  • /msg NickServ DROP will require confirmations from the user similar
    to the ChanServ variant. This is to prevent people DROPping when they
    should be GHOSTing or similar.
  • We’ve loaded two exttargets:
    • $registered, to grant flags to all users who are identified to services.
    • $chanacs, to grant flags to users who have flags in another channel.
    Please read /msg ChanServ HELP FLAGS for details on how they work.
  • The SASL mechanism DH-BLOWFISH has been removed. People using it
    can connect via SSL and use PLAIN or upgrade to ECDSA-NIST256P-CHALLENGE.
    Details of how to do so are here and our SASL page will be updated with the relevant documentation soonish.
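As an illustration of the new exttargets (the channel names below are hypothetical, and the syntax assumes Atheme’s usual FLAGS ordering of channel, target, flags):

```
/msg ChanServ FLAGS #example $registered +v
/msg ChanServ FLAGS #example $chanacs:#example-staff +V
```

The first grants voice privileges in #example to every user identified to services; the second automatically voices anyone who holds flags in #example-staff (flag letters as in Atheme’s default template).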

You should be able to connect to the testnet on port 9002 for cleartext and 9003 for SSL. Bear in mind that the database is a couple of weeks old, so changes you’ve recently made on the production network may not be mirrored on the testnet. Some staff members should be idling in #freenode on the testnet at all times; please feel free to poke us with any questions.



by tomaw at November 09, 2014 12:56 AM

November 07, 2014

RichiH's blog

Release Critical Bug report for Week 45


Please note that Lucas hacked a "key packages" count into this list. If you have spare cycles, look at those first.

I hope to have a (somewhat) random bug of the week thingie by next week which picks stalled bugs for increased exposure.

As you can see, we are a bit worse than in the Squeeze cycle, but way ahead of Wheezy. Stats with proper diffs will also start next week.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1154 (Including 190 bugs affecting key packages)
    • Affecting Jessie: 295 (key packages: 150) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 229 (key packages: 116) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 22 bugs are tagged 'patch'. (key packages: 12) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 14 bugs are marked as done, but still affect unstable. (key packages: 8) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 193 bugs are neither tagged patch, nor marked done. (key packages: 96) Help make a first step towards resolution!
      • Affecting Jessie only: 66 (key packages: 34) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 37 bugs are in packages that are unblocked by the release team. (key packages: 24)
        • 29 bugs are in packages that are not unblocked. (key packages: 10)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Diff
43 284 (213+71) 468 (332+136) +184 (+119/+65)
44 261 (201+60) 408 (265+143) +147 (+64/+83)
45 261 (205+56) 425 (291+134) +164 (+86/+78)
46 271 (200+71) 401 (258+143) +130 (+58/+72)
47 283 (209+74) 366 (221+145) +83 (+12/+71)
48 256 (177+79) 378 (230+148) +122 (+53/+69)
49 256 (180+76) 360 (216+155) +104 (+36/+79)
50 204 (148+56) 339 (195+144) +135 (+47/+90)
51 178 (124+54) 323 (190+133) +145 (+66/+79)
52 115 (78+37) 289 (190+99) +174 (+112/+62)
1 93 (60+33) 287 (171+116) +194 (+111/+83)
2 82 (46+36) 271 (162+109) +189 (+116/+73)
3 25 (15+10) 249 (165+84) +224 (+150/+74)
4 14 (8+6) 244 (176+68) +230 (+168/+62)
5 2 (0+2) 224 (132+92) +222 (+132/+90)
6 release! 212 (129+83) +212 (+129/+83)
7 release+1 194 (128+66) +194 (+128/+66)
8 release+2 206 (144+62) +206 (+144/+62)
9 release+3 174 (105+69) +174 (+105/+69)
10 release+4 120 (72+48) +120 (+72/+48)
11 release+5 115 (74+41) +115 (+74/+41)
12 release+6 93 (47+46) +93 (+47/+46)
13 release+7 50 (24+26) +50 (+24/+26)
14 release+8 51 (32+19) +51 (+32/+19)
15 release+9 39 (32+7) +39 (+32/+7)
16 release+10 20 (12+8) +20 (+12/+8)
17 release+11 24 (19+5) +24 (+19/+5)
18 release+12 2 (2+0) +2 (+2/+0)

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at November 07, 2014 05:28 PM

November 04, 2014

Md's blog

My position on the "init system coupling" General Resolution

I first want to clarify for the people not intimately involved with Debian that the GR-2014-003 vote is not about choosing the default init system or deciding if sysvinit should still be supported: its outcome will not stop systemd from being Debian's default init system and will not prevent any interested developers from supporting sysvinit.

Some non-developers have recently threatened to "fork Debian" if this GR does not pass, apparently without understanding the concept well: Debian welcomes forks, and I think that having more users working on free software would be great no matter which init system they favour.

The goal of Ian Jackson's proposal is to force the maintainers who want to use the superior features of systemd in their packages to spend their time on making them work with sysvinit as well. This is antisocial, and also hard to reconcile with the Debian Constitution, which states:

2.1.1 Nothing in this constitution imposes an obligation on anyone to do work for the Project. A person who does not want to do a task which has been delegated or assigned to them does not need to do it. [...]

As has been patiently explained by many other people, this proposal is unrealistic: if the maintainers of some packages were not interested in working on sysvinit support and nobody else submitted patches, then we would probably still have to release them as is, even if they were formally declared unsuitable for a release. On the other hand, if somebody is interested in working on sysvinit support then there is no need for a GR forcing them to do it.

The most elegant outcome of this GR would be a victory of choice 4 ("please do not waste everybody's time with pointless general resolutions"), but Ian Jackson has been clear enough in explaining how he sees the future of this debate:

If my GR passes we will only have to have this conversation if those who are outvoted do not respect the project's collective decision.

If my GR fails I expect a series of bitter rearguard battles over individual systemd dependencies.

There are no significant practical differences between choices 2 ("support alternative init systems as much as possible") and 3 ("packages may require specific init systems if maintainers decide"), but choice 3 is more explicit in supporting the technical decisions of maintainers and upstream developers.

This is why I think that we need a stronger outcome to prevent discussing this over and over, no matter how each one of us feels about working personally on sysvinit support in the future. I will vote for choices 3, 2, 4, 1.

November 04, 2014 06:36 PM

October 31, 2014

RichiH's blog

Release Critical Bug report for Week 44

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1168
    • Affecting Jessie: 274 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 224 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 30 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 182 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 50 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 2 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at October 31, 2014 11:37 PM

October 29, 2014

freenode staffblog

User-enabled sendpass

As a network, we feel it is hugely important to maintain close relationships with our many communities and users. Our interactions with users in #freenode and elsewhere on the network, fielding support requests and assisting users, help build and maintain these relationships.

But we’re constantly looking for things to change and make better, and one of the pieces of feedback we’ve had is that users would like a little automation – and the ability to be able to resolve some of their own support requests.

We recognise that allowing users to generate their own password reset e-mails brings us in line with other registration systems online and may provide a higher quality of service.

So for now, if you are having difficulties accessing your account, you can generate your own password reset e-mail using the following command:

/msg NickServ SENDPASS <account>

This command will only work with an offline account (i.e. it won’t work if a client is logged into your account via NickServ), and should obviously only be used on an account that you believe is yours.

We will be keeping an eye on how this feature is used, and may retain it permanently if it proves to be helpful and non-harmful!

by njan at October 29, 2014 09:39 PM

October 24, 2014

RichiH's blog

Release Critical Bug report for Week 43

Just a friendly reminder: If your package is not in unstable (and reasonably bug free) by Sunday, it's not in Jessie.

I am not doing full stats as I am unsure about the diff format at the moment, but in week 43 we had 284 bugs for Squeeze and 468 for Wheezy.

(284 + 468) / 2 = 376; so we are a bit better off than on average. Still, here's to hoping this freeze will be shorter.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1193
    • Affecting Jessie: 319 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 240 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 20 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 22 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 198 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 79 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 79 bugs are in packages that are not unblocked.

by Richard 'RichiH' Hartmann at October 24, 2014 07:52 PM

October 19, 2014

erry's blog

Supporting an openhatch session

About a week ago, I heard that the socialcoding4good community manager Emma Irwin was running an OpenHatch session at the University of Victoria (in Canada!). OpenHatch sessions are meant to help students get started with contributing to open source software. Emma was also going to show them how to contribute to one of my favourite projects, Webmaker, and since I like the project and have a lot of fun helping new contributors get started with it, I did the only reasonable thing and decided to help out… remotely!

The session

Apart from helping students fix Webmaker bugs, the other thing in the session schedule that particularly interested me was a briefing on using IRC. Since I'm running a Mozfest session about the subject soon (!), I thought it would be great practice, so I decided to speak about IRC myself.

Working from home

Technology made giving the talk remotely remarkably easy, and there weren't many surprises!
I started by helping out over Skype: Emma had hooked her screen up to a projector, so I could talk and share my screen with Skype's screen-sharing feature. I said a few short words about IRC and then jumped straight into demonstrating how to use a client and get on freenode's #openhatch and moznet's #introduction and #webmaker.

Later on, students were encouraged to ask any Webmaker questions in the #webmaker channel, where other contributors and I could help them out. I answered some of their questions, worked with them, and had lots of fun.

Overall, it was an awesome day/night both for me and the students, and I certainly hope to see them in #webmaker again, contributing and asking more questions!

by Errietta Kostala at October 19, 2014 01:07 PM

October 15, 2014

freenode staffblog

Server Issues: Update

Following up on our previous blog post, we have continued to investigate the compromise of freenode infrastructure, aided by our sponsors in addition to experts in the field.

NCC Group’s Cyber Defence Operations team kindly provided pro bono digital forensic and reverse engineering services to assist our infrastructure team and have recently published a report with some of their findings:

NCC’s support has been invaluable in aiding us in further securing our infrastructure, and we have already made significant changes to ensure that it is more resilient against further attacks. Our investigation into the compromise is ongoing and we will provide further updates as appropriate.

In the meantime, if you haven’t updated your password, we would advise you to do so, as some traffic may have been sniffed. Simply “/msg nickserv set password newpasshere” and don’t forget to update your client’s saved password.

Whilst we endeavour to provide a robust service, it is worth bearing in mind that no computer system is ever perfectly secure and many are inevitably breached. For this reason we do not suggest relying entirely on freenode (or any infrastructure) to protect sensitive data, and encourage our users to take further steps (e.g. unique passwords per service, encryption) as part of a defence in depth strategy to safeguard it.

We are extremely grateful to NCC in addition to our many other sponsors for their assistance and continued support. Without the ongoing support of our generous sponsors and wonderful infrastructure team, freenode would quite literally not have a network!

We will be continuing to work with our sponsors in addition to other relevant authorities regarding this breach and any further incidents.

by Pricey at October 15, 2014 09:27 PM

October 14, 2014

Md's blog

The Italian peering ecosystem

I published the slides of my talk "An introduction to peering in Italy - Interconnections among the Italian networks" that I presented today at the MIX-IT (the Milano internet exchange) technical meeting.

October 14, 2014 04:34 PM

October 05, 2014

erry's blog

Why I love contributing to open source software

I’m a generally quiet person, but if you ask me about open source projects, I’ll go on about them forever (I even had someone interview me about it). So, I thought I should finally get all of my honest thoughts down on my own blog as well!

freenode (or the biggest reason why I got into open source)

freenode is an IRC network dedicated to open source and peer-directed project development. It enables open source project developers to get together and discuss their work, and also provide support to their users.

I started supporting users in #freenode in summer 2011, literally because I was bored; then I realised I quite enjoyed it, so I just kept doing it (at the expense of my high school studies… it will be a cold day down below before I mention high school on this blog again). Anyway, I was eventually invited to become staff, which I accepted, and I’ve been staff ever since. I’m not sure why anyone would enjoy spending countless hours just supporting freenode users with any questions they have without expecting anything in return, but I loved it then and I love it now! I think freenode is awesome, because it helps bring many open source projects, companies and non-profits together with their users and assists in collaboration.

My role as a staff member involves helping representatives of on-topic projects manage their community on freenode, and helping other users with finding their way around and using the network.

I’m also involved in developing their group management system, which, when deployed, will make project affiliation with freenode a lot easier. Groups will be able to give us some information and perform verification on a website which will track requests, rather than having to do this manually with a staff member. They will also be able to take over channels for their users and perform other currently manual tasks through the portal.

Working on it helped me a lot, because I gained many of the skills I later used in my university studies and in my actual job. That’s another thing about me: I think working on open source projects ultimately helps me as much as it helps the project, at least most of the time!

Mozilla, webmaker, etc.

I also contribute to Mozilla’s projects. I fixed a few bugs for Firefox and Firefox for Mobile at first, but then I discovered Webmaker, mostly thanks to social coding for good, which pointed me to that project. Webmaker was easier and nicer for me to contribute to, because it used technologies I like and use, so it “stuck”. I also love Webmaker because of its goal – to provide web literacy all over the world! I think it’s extremely important, because there are so many Internet users, and many of them would have their lives greatly improved if they could use the web to make ideas from their imagination come true and to express themselves. As with past open source projects, it helped me learn more about AngularJS shortly before I started an Angular project at work, so it was also helpful to me!

As a code contributor for webmaker, I look for bugs or feature requests filed by Mozilla employees and other contributors and resolve some of them.

I also have the unique opportunity to attend weekly demos, seeing what everyone else in the project has been up to, and even presenting my own work! This is really awesome for me because I get the opportunity to see amazing technologies in use and learn about how they were used.

Eventually I even started reviewing bugs for other contributors and Mozilla employees, and mentoring bugs for new contributors to help people get started on the project, which is also extremely rewarding and something I’m really glad I get to do!

In general, I really love open source software. I think it always helps people one way or another – after all, just by publishing your code so that everyone can see it, you enable them to get ideas and learn (and in return you may get contributions from people who want to see their enhancements in your project!). I like contributing to certain open source projects because they’re projects I use and/or care about, because it improves my skills, because I want a feature implemented or a bug fixed so I fix it myself, because the community is awesome, and because I like being a small part of something big and awesome! I think it’s a good use of my time and knowledge, both for my own development and for the community. Because of this, I plan to keep contributing for years to come!

by Errietta Kostala at October 05, 2014 08:26 PM

October 03, 2014

Md's blog

15 years of whois

Exactly 15 years ago I uploaded to Debian the first release of my whois client.

At the end of 1999 the United States Government forced Network Solutions, at the time the only registrar for the .com, .net and .org top level domains, to split their functions into a registry and a registrar, and to allow competing registrars to operate.

Since then, two whois queries are needed to access the data for a domain in a TLD operating with a thin registry model: first one to the registry to find out which registrar was used to register the domain, and then one to the registrar to actually get the data.

Being as lazy as I am, I thought that this was unacceptable, so I implemented a whois client that would know which whois server to query for all TLDs and then automatically follow the referrals to the registrars.
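The two manual steps this automates look roughly like the following (an illustrative transcript: whois.verisign-grs.com is the .com registry's server, while whois.example-registrar.com stands in for whatever server the referral actually names):

```
# Step 1: query the registry, which only returns a referral:
$ whois -h whois.verisign-grs.com example.com
   Registrar WHOIS Server: whois.example-registrar.com
# Step 2: query the registrar's server named in the referral:
$ whois -h whois.example-registrar.com example.com
```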

But the initial reason for writing this program was to replace the simplistic BSD-derived whois client that was shipped with Debian with one that would know which server to query for IP addresses and autonomous system numbers, a useful feature in a time when people still used to manually report all their spam to the originating ISPs.

Over the years I have spent countless hours searching for the right servers for the domains of far away countries (something that has often been incredibly instructive) and now the program database is usually more up to date than the official IANA one.

One of my goals for this program has always been wide portability, so I am happy that over the years it was adopted by other Linux distributions, made available by third parties to all common variants of UNIX and even to systems as alien as Windows and OS/2.

Now that whois is 15 years old I am happy to announce that I have recently achieved complete world domination and that all Linux distributions use it as their default whois client.

October 03, 2014 05:32 AM

September 29, 2014

Md's blog

CVE-2014-6271 fix for Debian woody, sarge, etch and lenny

Very old Debian releases like woody (3.0), sarge (3.1), etch (4.0) and lenny (5.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug) and Florian Weimer's patch, which restricts the parsing of shell functions to specially named variables.
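The standard quick check for CVE-2014-6271 (not specific to these packages) is to export a crafted function definition through the environment and see whether bash executes the trailing command:

```shell
# On a vulnerable bash this prints "vulnerable" and then "ok";
# on a patched bash it prints only "ok".
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```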

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

September 29, 2014 08:51 AM

September 26, 2014

RichiH's blog

Release Critical Bug report for Week 39

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1393
    • Affecting Jessie: 408 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 360 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 50 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 290 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at September 26, 2014 08:45 PM

September 20, 2014

freenode staffblog

Server issues

Earlier today the freenode infra team noticed an anomaly on a single IRC server. We have since identified that this was indicative of the server being compromised by an unknown third party. We immediately started an investigation to map the extent of the problem and located similar issues with several other machines and have taken those offline. For now, since network traffic may have been sniffed, we recommend that everyone change their NickServ password as a precaution.

Before changing your password, please check your email address in /msg nickserv info and, if needed, update it – see /msg nickserv help set email (remember to check your new email for the verification key). This will ensure that we can send you a password reset email should, for whatever reason, your password change not work properly. If you have no email set on your account or an email set that you cannot access, we cannot send password resets to you, so do please keep this up-to-date.

To change your password use /msg nickserv set password newpasshere

Since traffic may have been sniffed, you may also wish to consider changing any channel keys or similar secret information exchanged over the network.

We’ll issue more updates as WALLOPS and via social media!

by mrmist at September 20, 2014 09:02 AM

September 16, 2014

erry's blog

Raspberry pi router

Yesterday I made my Raspberry Pi function as a router! It took me a long time, mostly because I was using my own custom compiled kernel (don’t worry, you don’t have to do that). There’s probably already enough blogs on the subject, but I thought I’d make one, too!


  • Raspberry pi (duh)
  • For Ethernet routing:
    • An ethernet switch
    • IPTables – This comes with the stock raspberry pi kernel, so you shouldn’t have a problem if you’re not using your own like I do
    • udhcpd, if you want clients to get addresses over dhcp
  • For wireless routing, the above, plus:
    • hostapd
    • haveged may be required to generate entropy if wireless is being very slow
    • A supported wireless adapter (I have an RT5372). This post lists what you can use (and is another decent tutorial). What you need is an adapter that can do access point mode. You can apt-get install iw, then run iw list and look for ‘AP’ in ‘Supported interface modes’ to determine if your adapter supports it.
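That last check can be scripted as a small convenience. This assumes iw’s usual output format, where supported modes appear as “* AP” lines under “Supported interface modes”:

```shell
#!/bin/sh
# Look for "* AP" (anchored, so "* AP/VLAN" doesn't count) in iw's output.
if iw list 2>/dev/null | grep -q '\* AP$'; then
    echo "adapter supports AP mode"
else
    echo "no AP support found (or iw is not installed)"
fi
```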

Getting ready

If you have a custom kernel like I do, now is probably the time to re-compile it if it doesn’t already come with what you need. If not, skip this paragraph and the next. Your kernel needs IP tables and drivers for your wireless card, if doing wireless routing. I spent a lot of time finding the right options, and don’t want anybody else to go through the same pain, so I’m providing my kernel compilation .config file. Note that you’ll probably need to build on top of it to get the right drivers if doing wifi and not using the RT5372 chipset.

The most important options for IP tables if compiling are the *_NF_*, *IPV4*, *NET* and *INET* options I have selected in my config. If you want to do it on your own, make sure at least networking, network filtering, IP tables, IPV4 connection tracking, conntrack, and IPV4 NAT are enabled. In the GUI tool for the config you can go to edit->find to find what you need, and it gives you some information about where the option is and what it requires. Note that some options require others to be selected before they even show up in the configuration tool, which is really annoying.
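If your kernel exposes its config (CONFIG_IKCONFIG_PROC), you can check for some of these options without recompiling. The option names below are the generic netfilter ones and may differ slightly between kernel versions:

```shell
#!/bin/sh
# Check /proc/config.gz (present only with CONFIG_IKCONFIG_PROC) for a few
# of the options mentioned above.
for opt in CONFIG_NETFILTER CONFIG_IP_NF_IPTABLES CONFIG_NF_CONNTRACK; do
    if zcat /proc/config.gz 2>/dev/null | grep -q "^$opt=[ym]"; then
        echo "$opt: enabled"
    else
        echo "$opt: not found (or /proc/config.gz unavailable)"
    fi
done
```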

If you’re doing wireless routing, the first thing to do is to make sure your wifi is working – is it showing wlan0 in ifconfig -a? Does sudo iw dev wlan0 scan bring back a list of wireless networks? Does connecting to one work? If yes, good. If not, look at dmesg and try to find out what’s wrong. For example, I needed the firmware-ralink package to get my card to work.

Now that IP tables and your wireless card are working, you can set up the router!

Ethernet routing

You need to run the following as root.

First let’s give ourselves an IP address that we will use on our NAT:

ip link set up dev eth0:1
ip addr add <ip-address>/24 dev eth0:1 # You can change the IP address here

Make sure packet forwarding is enabled:

sysctl net.ipv4.ip_forward=1

Set up forwarding:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0:1 -o eth0 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

Now, if you connect another device on the network and give it a 192.168.4.* address, with the Pi’s eth0:1 address set as the gateway, you should have Internet access routed to it!

If it’s working, and you want to make your changes permanent, edit /etc/network/interfaces:

    # Internet from the wall, DHCP
    auto eth0
    allow-hotplug eth0
    iface eth0 inet dhcp

    # Static IP address for your pi router
    auto eth0:1
    iface eth0:1 inet static
        address <ip-address>
        netmask 255.255.255.0

Then, edit /etc/sysctl.d/30-ipforward.conf to permanently allow IP forwarding:

net.ipv4.ip_forward=1
Save IP tables rules:

iptables-save > /etc/iptables/rules

Now edit /etc/rc.local. Before exit 0 you can add this:

/sbin/iptables-restore < /etc/iptables/rules

And your rules will be restored on boot.

Wireless routing

Make sure hostapd is installed. Edit /etc/hostapd/hostapd.conf, changing options as appropriate – the values below are typical placeholders; at minimum set your own ssid and wpa_passphrase:

### Wireless network name ###
ssid=MyPiNetwork
## This is required ##
interface=wlan0
hw_mode=g
channel=6
wpa=2
wpa_passphrase=ChangeThisPassphrase
## Key management algorithms ##
wpa_key_mgmt=WPA-PSK
## Set cipher suites (encryption algorithms) ##
## TKIP = Temporal Key Integrity Protocol
## CCMP = AES in Counter mode with CBC-MAC
wpa_pairwise=TKIP CCMP
## Shared Key Authentication ##
auth_algs=1
## Accept all MAC address ###
macaddr_acl=0
## Most cards work with this ##
driver=nl80211

Now, similar to before with ethernet routing:

ip link set up dev wlan0
ip addr add <ip-address>/24 dev wlan0 # You can change the IP address here

If you ran the iptables commands for ethernet forwarding before, you can run only the second command here:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

Hopefully, sudo hostapd /etc/hostapd/hostapd.conf will start hostapd up without errors. If so, you can edit /etc/default/hostapd and set DAEMON_CONF="/etc/hostapd/hostapd.conf" if you want it to start automatically.

You should be able to see a wireless network with the name you gave above. Connect a client to the wireless network – if you’ve installed the dhcp server it should automatically get an address, but if not give it a 192.168.123.* address and set the Pi’s wlan0 address as the gateway. Hopefully you have internet access!!!

If you want the changes to be permanent, see the wired NAT guide above and make the appropriate changes.


As you might imagine, not too impressive. The raspberry pi ethernet port is itself attached via usb, and my usb wireless adapter isn’t fast enough for wireless routing. For me, wired routing works pretty well – I don’t see a difference between using my raspberry pi as a router and connecting directly to the wall, but note that I only have a 10mbps connection anyway. However, wireless routing, although it works, ‘hangs’ and becomes slow when transferring any non-trivial amount of data, such as downloading files. Still an interesting experiment to try, though!

by Errietta Kostala at September 16, 2014 09:09 AM

September 12, 2014

RichiH's blog

Release Critical Bug report for Week 37

Remember, remember; the fifth of November.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1422
    • Affecting Jessie: 410 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 355 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 52 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 26 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 277 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 55 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 55 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at September 12, 2014 08:15 PM

August 28, 2014

erry's blog

Recovering data from an old encrypted home

Need to copy files from an old home encrypted with ecryptfs? Whether you’re doing it off a livecd or a new installation, at least in Ubuntu 14.04 it’s simple.

First, install ecryptfs-recover-private

$ sudo apt-get install ecryptfs-recover-private

Now, ensure the device you want to recover from is mounted:

# /dev/sdXY is the device, e.g. /dev/sda1
$ sudo mkdir -p /mnt/old-home
$ sudo mount /dev/sdXY /mnt/old-home

Now you need to point ecryptfs-recover-private to the disk’s /home/.ecryptfs/<your-user>/.Private. e.g.:

$ sudo ecryptfs-recover-private /mnt/old-home/home/.ecryptfs/errietta/.Private

Just follow the prompts from the previous command. If wrapped-passphrase exists in the directory, you will be prompted for your login passphrase. Otherwise (or if you forgot your login passphrase), you need the encryption key that was created when first setting up ecryptfs.

INFO: Found [.../.ecryptfs/errietta/.Private].
Try to recover this directory? [Y/n]: Y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] Y
INFO: Enter your LOGIN passphrase...
Inserted auth tok with sig [...] into the user session keyring
INFO: Success!  Private data mounted at [/tmp/ecryptfs.2eLhj8mU].

Voilà! You now have access to your old home in the directory mentioned in the last line of that command.

by Errietta Kostala at August 28, 2014 12:04 PM

August 25, 2014

erry's blog

IRC isn’t just old school: enriching community through text-based chat


IRC is a great medium for communities to get together, answer users’ questions and collaborate. Although it may seem primitive, its low-bandwidth consumption and wide variety of ways to access it make it an ideal way for people to connect from any location or background.

There is a virtually unlimited selection of clients (programs) available to connect to IRC networks. Their UI does the work of presenting the protocol in a friendly format, tailored to the user’s needs.

Many popular FOSS projects are on IRC already, and it is likely that most of the open source software you use will have its own channel (or channels) for discussion.

A channel’s name will almost always begin with a #, such as #myChannel. Users are usually allowed to create their own channels. You can also PM (Private Message) other users which allows you to talk in private with a specific user.

Aside from chatting, IRC allows further features & extensions by 3rd party tools and services. For example, some individuals & companies run bouncers. Bouncers stay permanently online and connect to whichever IRC network and channels you wish. When you come back online, they replay your messages back to you. This allows you to maintain a constant connection to IRC without actually keeping your personal machine online all the time.

Also, most IRC networks have services, which are bots that appear as fellow users. They help you register nicknames and channels, and help with other network-related tasks—but more on that later.

Although IRC use in general has been declining, the use of IRC networks meant for project collaboration, like freenode, has increased consistently.

Connecting to a network

Just as different websites hold several individual pages, there are many IRC networks, each with their own channels.

freenode (IRC address: irc:// is an IRC network used by many popular open source projects to provide support for their communities, discuss development, and collaborate. Mozilla projects, meanwhile, are on Mozilla’s own IRC network, irc://

Most clients have a graphical interface that allows you to choose a network, and define other networks if they don’t appear on the pre-populated list. Most clients (even graphical) still process text commands, such as the /server <server address> or /connect <server address> commands to connect to a new server.

Xchat’s network list

Xchat connected to freenode

They will usually interpret anything beginning with / as a command to be handled by the client or sent to the server. So if you type /server or /connect in your client’s input box, you will be connected to freenode. Anything not beginning with a / will be sent to the currently selected target, a channel or other user.

Irssi doesn’t have a graphical interface and uses the /connect command to get on networks.

Irssi connected to freenode

Similarly, /server or /connect will connect you to mozilla’s IRC network.

Adding moznet as a network to xchat

xchat connected to moznet

Connect to moznet and/or freenode. Configure your IRC client and save these settings if necessary.

Finding channels for projects


Many open source projects use freenode to host channels for communication.

On freenode, you can search for channels using one of its services (more on services later…), ALIS (Advanced channel LIsting Service).

For example, to search for channels with a name matching ‘science’, you’d use the command:

/msg alis list *science*

The /msg command just sends a message to the user in the next parameter. ALIS is the service, represented by a user in the network. list *science* is the command we’re sending it. The wildcards (*) are needed, because channels have at least one “#” character in the beginning of the name (i.e. science wouldn’t match anything, as the channel is “##science”) and because we want to match names that are like ”#science-something” as well.
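The wildcard behaviour is ordinary glob matching, which you can mimic with a shell case statement to see why the bare pattern fails (a toy illustration, not alis itself):

```shell
#!/bin/sh
# Glob-match a channel name against a pattern, the way alis's wildcards work.
match() { case "$1" in $2) echo "match";; *) echo "no match";; esac; }

match '##science'        'science'     # no match: the leading "##" isn't covered
match '##science'        '*science*'   # match
match '#science-fiction' '*science*'   # match
```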

The above command should make alis send you a notice with the data:

10:28 -*alis(alis@services.)- Returning maximum of 60 channel names matching '*science*'
10:28 -*alis(alis@services.)- ##askscience                                        23 :Welcome to the official Askscience IRC channel (
10:28 -*alis(alis@services.)- ##cognitivescience                                   
10:28 -*alis(alis@services.)- ##science                                           81 :Welcome to the science channel! | Rules: | Channel Survey: | Topics that are ...
10:28 -*alis(alis@services.)- End of output

The output after running /msg alis list *science*

You can see the names of channels, users, and topics there

Let’s say, for example, that you want to join one of these channels, like ##science. Just type /join ##science, and a window should pop up in your client! Depending on your client, this could look any number of ways, but if you join more than one channel, you will usually be able to see each channel (usually as tabs) in the client and switch between them. You can then just type your message in the ##science window, and all other joined users will receive it!

xchat joined to the ##science channel, you can see the option to switch between “freenode” (the server window) and “##science” (the channel).

Irssi is a little different: windows are numbered from 1 to whatever the last window is, and window numbers flash when there is activity.

Moznet doesn’t have ALIS to search for channels, but you can type /list *channel* to find all channels matching *channel*. But be careful with the /list command; on large networks like freenode and moznet, you will be spammed with results if you do it with no parameters.

Most Mozilla projects will mention in their wiki what their moznet channel is, if there is one. Additionally, you can see a list of the project channels on the Mozilla Wiki. You can always try running /join followed by the project name as well. For example /join #firefox will get you to the channel for firefox and /join #webmaker will get you to the channel for webmaker.

Although there are many IRC networks, most communities that use IRC will specify which network their channel is on, so you don’t have to play guesswork with the network name. There are, however, tools for searching IRC networks as well.

On the project website

Most projects will have a “contact” or “support” page that links to their IRC channel. For example, jquery’s support page explains that #jquery is the support channel for jquery on freenode. You’d just type /join #jquery in your IRC client, and ask questions there!

Find your favourite community’s channel!


Services are users/bots that take commands and enhance the IRC experience. We talked about ALIS earlier; it is one of the services available on freenode.

The most useful services are NickServ, ChanServ, and on networks that offer it, ALIS.


NickServ allows registering your account, so that your nickname is yours and only yours. You associate a password with it, and use that password to identify yourself when you return.

Because networks vary, /msg NickServ HELP will give you network-specific NickServ examples, but the following method for registering works on both freenode and moznet:

/msg NickServ REGISTER <password> <email>

This will register a nickname (replace “<password>” and “<email>” with a password of your choosing and a valid email, preferably your own).

You will get a confirmation from NickServ telling you if you succeeded or typed the command wrong. You may also have to access your email inbox to confirm your registration.

After registration is complete, you can use the command /msg NickServ IDENTIFY <password> to identify yourself and keep your nickname every time you log in. Again (if you replace “password” with your password), NickServ will give you confirmation that you’ve been logged in.

NickServ registration is super important as it prevents users from impersonating you, ensures you keep your nick, and allows you to manage channels.

Registering a nickname on freenode. You can see the notices NickServ sent, shown as “-NickServ- message here”, and the messages we sent to NickServ as “>Nickserv< message here”.


  • Register your nickname on moznet and/or freenode!
  • Play around with nickserv and find what else you can do!
    For example, try to set some metadata on your account (like the one I have in mine /msg NickServ INFO erry) with /msg nickserv HELP SET PROPERTY

Seeing information about users

You can use the /whois command to learn some information, such as if someone is identified to nickserv. For example, /whois errietta will produce the following:

* [errietta] (erry@freenode/staff/erry): Errietta Kostala (
* [errietta] #perl 
* [errietta] :US
* [errietta] is using a secure connection
* [errietta] is logged in as erry
* [errietta] End of WHOIS list.

You can see this user is identified by the penultimate line: * [errietta] is logged in as erry – if they were not identified, it would not be there.

Additionally, you can use /msg nickserv info followed by the user name. The output of this command will likely vary between networks. On freenode, you’ll see the following for the command /msg nickserv info tt :

-NickServ- Information on tt (account tt):
-NickServ- Registered : Jun 13 20:14:05 2009 (5 years, 10 weeks, 1 day, 14:59:52 ago)
-NickServ- User reg.  : May 27 18:45:23 2006 (8 years, 12 weeks, 5 days, 16:28:34 ago)
-NickServ- Last seen  : now
-NickServ- Flags      : HideMail, Hold, Private
-NickServ- tt has enabled nick protection
-NickServ- *** End of Info ***

Not only can you see when they registered their account, but also the last time they were online. “Last seen: now” means they are currently online.

See your own NickServ info, or somebody else’s.


As discussed above, ALIS is useful for searching for channels on freenode. Running /msg alis help list is a great way to learn how it works.

Basic command use:

/msg alis list *channelname* -topic *topic* -min 50

This will match all channels with a name matching “channelname”, a topic matching “topic”, and a minimum of 50 users.

Channels matching name and topic “linux” with a minimum of 500 users

Find more channels with alis. Try playing around with the parameters to limit your search more.

ChanServ / Registering a channel for your community

Just as NickServ allows registering your nickname, ChanServ allows registering channels. If you run a neat open source project, and you want to claim a channel for your users to contact you, you can register it on a network like freenode. (Read the network’s policies & guidelines first though—freenode’s can be found at “registering a channel on freenode”.)

First, /join an empty channel, like you would join any other channel. /join #MyAwesomeNewProject

Then, you can /msg chanserv register #MyAwesomeNewProject (this command’s syntax varies per network; see /msg chanserv help)

ChanServ will tell you if you succeeded or failed, like NickServ above.

That’s all! You’re now the proud owner of a channel and you can direct your users to it. Most networks have a webchat interface you can take advantage of, and you can also mention the IRC network and channel name you are using on your website, mailing list, twitter profile, etc.

Registering awesome channel

A “different user” who is connected through webchat.


  • If you have a community/project you want to register a channel for, go ahead and do that.
  • Play with /msg ChanServ HELP and /msg ChanServ HELP SET and see what settings you can set.

Getting help

IRC commands can be intimidating to some people when they first encounter them, but they’re not that different from normal terminal commands; you get used to the idea eventually.

If you need help with anything related to IRC, you can usually join an IRC network’s #help channel for help with using the network itself. Mozilla also has a handy wiki page on IRC.

Additionally, help documentation with examples is almost always built in. In our alis example above, running /msg alis help list will return help for the list command.

 -alis-  ***** alis Help *****
 -alis-  Help for LIST:
 -alis-  LIST gives a list of channels matching the
 -alis-  pattern, modified by the other options.
 -alis-  The pattern can contain * and ? wildcards.
 -alis-  Options are:
 -alis-      -min <n>: minimum users in channel
 -alis-      -max <n>: maximum users in channel
 -alis-      -skip <n>: skip first <n> matches
 -alis-      -show [m][t]: show modes/topicsetter
 -alis-      -mode <+|-|=><modes>: modes set/unset/equal
 -alis-      -topic <pattern>: topic matches pattern
 -alis-  Syntax: LIST <pattern> [options]
 -alis-  Examples:
 -alis-      /msg alis LIST * -min 50
 -alis-  ***** End of Help *****

Messaging services and experimenting with the term “help” is always a good idea. It’s hard to break anything, and easy to learn how to get more use out of these tools.


  • Look at other services, like MemoServ. Can you find the HELP command on how to use it and find out how to send a memo to someone?

by Errietta Kostala at August 25, 2014 08:52 PM

August 13, 2014

RichiH's blog

Slave New World

Ubiquitous surveillance is a given these days, and I am not commenting on the crime or the level of stupidity of the murderer, but the fact that the iPhone even logs when you turn your flashlight on and off is scary.

Very, very scary in all its myriad of implications.

But at least it's not as if both your phone and your carrier wouldn't log your every move anyway.

Because Enhanced 911 and its ability to silently tell the authorities your position was not enough :)

by Richard 'RichiH' Hartmann at August 13, 2014 06:39 PM

August 08, 2014

RichiH's blog

Microsoft Linux: Debian



(Yes, I am on Debian's trademark team and no, I have no idea what that means. Yet.)

Update: Thanks to Marcin and Steven Chamberlain for this find: It seems Debian Red is an actual name used by designers.

by Richard 'RichiH' Hartmann at August 08, 2014 01:25 PM

July 25, 2014

RichiH's blog

Release Critical Bug report for Week 30

I have been asked to publish bug stats from time to time. Not exactly sure about the schedule yet, but I will try and stick to Fridays, as in the past; this is for the obvious reason that it makes historical data easier to compare. "Last Friday of each month" may or may not be too much. Time will tell.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1511
    • Affecting Jessie: 431 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 383 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 44 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 319 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

by Richard 'RichiH' Hartmann at July 25, 2014 09:58 PM

mquin's blog

Couch to 5k - the view from halfway

Back in June, inspired by the hint of some pleasant weather and the understanding that I'm perhaps not as active as I ought to be, I bought a pair of running shoes and started following the popular "Couch to 5k" training programme.

The programme starts out with short periods of jogging interspersed with slightly longer periods of walking, and over the subsequent weeks the time spent jogging is increased steadily, with the aim - as laid out in the name - of allowing a fairly sedentary person with no running experience to progress to covering a 5km run over the space of nine weeks. Being a little bit active I'm probably not in the typical couchbound start group, but I've not really done any running since high school.

Now that I've finished week 5 - more than halfway through - I figured I'd write a bit about the experience.

On day one I didn't manage to finish the routine as laid out and was wondering whether I'd just wasted the cost of a pair of shoes.

Looming large during the early weeks was the worry that my damaged hip might not be able to deal with jogging; however, despite some initial discomfort this hasn't turned out to be a problem, and I'd go as far as to say that the level of discomfort (which has been variable, but constant, pretty much since I broke the hip) is lower now than it has been in the past.

So far, I've found the progression to be perfect - the workouts have been challenging, but I've managed, with a bit of willpower, to complete all of them so far, and I'm feeling positive about the remaining weeks.

The last week has been the hardest so far - while the first four involved doing the same routine three times, week five ramps up the distance covered at one time considerably, from 5 minutes at the start of the week to 20 at the end (and, as my luck would have it, the long run fell on the warmest day as well).

July 25, 2014 05:54 PM

July 17, 2014

erry's blog

5 Excellent developer tools features

These are five features I use often (apart from the last, which I’ve never used!) of the developer tools in Firefox and Chrome. I thought I would blog about the similar features in both browsers instead of focusing on one of them, and show how to use them in each.

Emulating element states (hover, focus, etc.)

One of my favourite features, which makes things much less of a pain, is the ability to trick the browser into thinking that you’re hovering over an element until you remove the flag. You can then freely inspect any :hover css code and see how it’s (not) working.

In chrome dev tools, after you inspect an element, you can right click on the element (make sure to not do this on the body/text or it won’t work) and click on ‘force element state’, then the state you want (e.g. ‘:hover’)

Chrome dev tools – hover

Now, you should see your stylesheet for the :hover state!

Hover preview - Chrome

Firefox can do the same: simply right click an element and click ‘hover’. It again allows you to see the effect of your CSS and see the new :hover code applying.

Firefox hover preview

Device emulation

In chrome, you can toggle the bottom pane by pressing ESC on developer tools. Once open, you can choose the ‘emulation’ panel. This gives you several options: from screen sizes, to user agents, to whole device setups! Perfect for testing responsive websites.
You can also emulate touch events with your mouse there.

Chrome emulating a nexus 5

In firefox, there is a little icon on the element inspector that toggles responsive design mode

firefox responsive design

Once clicked, you can again see your website in a variety of sizes.
Firefox emulating a smaller screen

There’s a dropdown to select a screen size, and you can toggle touch event emulation again or take a screenshot. My favourite feature there though is the ability to rotate the screen with a button!


Network panel

Chrome has a networking tab in its console which allows you to see all your requests and how much time each request took. Apart from being a way to see if you have any slow pages, it also helps diagnose errors such as missing assets and lets you see AJAX requests.

All requests

It also shows details for a specific request when clicked, making it easy to see request headers and body – very useful for debugging POST requests.

Request details in chrome network panel

Finally, there’s the ‘audits’ panel which offers some advice on making your page faster, sort of like YSlow and similar extensions.

Audit panel

Firefox offers the same functionality in its own networking panel:

Firefox networking

Viewing a request

And if you click the little clock on the bottom right, you can see a breakdown of the various page elements and how much time they take.

My only regret about going back to student accommodation for my year in industry is that I will greatly miss my parents' fibre optic connection.

Style editor

If you open a CSS file in sources, chrome allows you to make changes to it on-the-fly, and see if they apply. This, of course, is in addition to being able to see and edit any styles that apply to an element back in the elements panel!

Chrome style editor

Firefox has a style editor pane that allows the same, as well as adding new stylesheets and importing stylesheets, which is even better.

Firefox style editor

Shader editor

This is a firefox feature I only found out about while writing this post! If you click the little wrench icon in the top left of firefox dev tools, you can enable the shader editor. If you go to a page that uses WebGL afterwards (such as HelloRacer) you can see and edit shader code. Pretty neat!

Shader editor

Are there any features you use a lot? Do leave a comment!

Thanks for reading, and see you next time.

by Errietta Kostala at July 17, 2014 07:46 PM

June 28, 2014

erry's blog

My first activity as a STEM ambassador

A couple of days ago (as of writing this, at least!), on the 26th of June, 2014, I went out to a school with a staff member from University and supported a session on Raspberry pi. The session was part of a club taking place during a week when the students didn’t have regular classes, and it lasted the whole day – it was one of a whole week of sessions on Linux, Raspberry pi, computing, and programming done for that club.

My role as a STEM ambassador for the activity was to help out in the session when students had trouble doing something. I’m pleased to say, however, that that didn’t happen too much: the students were very awesome and learned so quickly!

They were first handed a raspberry pi each, with raspbian and other required software pre-installed, and were shown how to remote control the pi with ssh and vnc and run an X session remotely without a monitor attached to the pi. They actually managed to run zenmap and find all the IP addresses of the raspberry pis, then pointed a browser at the IP addresses to find the one where they had written their name on apache’s default index page!

Each student remote controlled their own pi

They also got to log in to a single raspberry pi and run as many applications as possible to see how much it could take – I was surprised to see that even with 8 different tightvncserver sessions and several applications running, it could take the load up to a point!

A debian machine remote controlling a raspberry pi

Run ALL the apps!

Finally, there was a short session on HTML and JavaScript and building a simple application that would first just read a number from a textbox and then read 2 numbers and add them (it was supposed to be later developed into a full calculator). Again, I was pleasantly surprised to see how quickly they were able to figure out how to do this.

Overall, I was blown away by how organised the teacher and school were for the activity, and by how well-behaved and quick to learn the students were. We can definitely hope for a generation of more people interested in STEM fields if programs like this keep up!

I had an awesome time as a STEM ambassador for the activity, and I’m looking forward to supporting more activities in the future when I have time. I encourage anyone with an interest in STEM and getting more young people into the field to be an ambassador, it’s an amazing experience!

(Copyright info: You’re welcome to use the post text under CC-BY like my other posts, but please don’t use the pictures without permission as they weren’t taken by me)

by Errietta Kostala at June 28, 2014 06:30 PM

June 18, 2014

freenode staffblog

New extban: $j

We have loaded a new module on the network which provides the $j extban type:

$j:<chan> – matches users who are or are not banned from a specified channel

As an example…

/mode #here +b $j:#timbuktu

…would ban users from #here that are banned (+b) in #timbuktu.

Please note that there are a couple of gotchas:

  • Only matching +b list entries are checked. Quiets (+q), exemptions (+e) and invexes (+I) are NOT considered. As such, the following mode change would not alter the behaviour of the first example:

/mode #timbuktu +e *!*@*

  • Quiets and the quieting effect of bans may not immediately take effect on #here when #timbuktu’s ban list changes due to caching by the ircd.
  • $j isn’t recursive. Any $j extbans set in #timbuktu are ignored when matching in #here.
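
One pattern this enables (channel names here are hypothetical) is sharing a single ban list across several related channels: maintain the +b entries in one channel and reference it from the others:

/mode #project-dev +b $j:#project-bans

/mode #project-offtopic +b $j:#project-bans

Updating the ban list in #project-bans then affects both channels, subject to the caching caveat above.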

We imagine you’ll have some more useful use cases than the above.

Thanks for flying freenode!

by Pricey at June 18, 2014 09:34 PM

June 07, 2014

erry's blog

CSS sibling selectors

I recently discovered CSS sibling selectors, and an awesome way to take advantage of them. Needless to say, mind was blown so I had to blog about it!

First of all, about sibling selectors themselves: they match an element that’s on the same level as another element, either directly after it (adjacent sibling selector) or anywhere in the same level (general sibling selector).

For example, given the following CSS code:

h1 + p { color: red; }

And the following HTML:

  <h1>A heading</h1>
  <p>This is red</p>
  <p>This isn't</p>

The first paragraph will be red because it’s right after an h1.

Likewise, if we use the general sibling selector:

h1 ~ p { color: red; }
  <p>this isn't red, since it comes before the h1</p>
  <h1>A heading</h1>
  <div>We can even have another element in between.</div>
  <p>This is red.</p>
  <span>It doesn't matter if they're right next to each other as long as they're on the same level</span>
  <p>This is also red!</p>
  <div><p>However, this isn't red</p></div>

As you can see, any <p> tag that follows an <h1> tag in the same level of the DOM will match the CSS rule.

So, what’s so cool about this? How can you take advantage of it? Imagine the following scenario:

    <h1>Supernatural Directory</h1>
    <p>Match only:</p>

    <input type="radio" name="supernatural" id="all" checked/> <label for="all">All</label>
    <input type="radio" name="supernatural" id="werewolves" /> <label for="werewolves">Werewolves</label>
    <input type="radio" name="supernatural" id="fairies" /> <label for="fairies">Fairies</label>
    <input type="radio" name="supernatural" id="vampires" /> <label for="vampires">Vampires</label>

    <h1 class="directory werewolves all">Werewolves</h1>
    <ul class="directory werewolves all">
        <li>Rodney Robinson</li>
        <li>Matthew Phillips</li>
        <li>Rebecca Vasquez</li>
    </ul>
    <h1 class="directory fairies all">Fairies</h1>
    <ul class="directory fairies all">
        <li>Deborah Wong</li>
        <li>Sara Douglas</li>
        <li>Jean Gutierrez</li>
    </ul>

    <h1 class="directory vampires all">Vampires</h1>
    <ul class="directory vampires all">
        <li>Jesse Peters</li>
        <li>Michael Hunter</li>
        <li>Willie Washington</li>
    </ul>

We obviously want to show only one of the categories by pressing one of the radio buttons. Obviously, this is easy enough to do with javascript, but did you know you can do it with just CSS? By taking advantage of sibling selectors, you can!

First, let’s hide all the .directory elements:

.directory {
    display: none;
}

Now, if we do this:

#all:checked ~ .all {
    display: block;
}

This means that all elements with the ‘all’ class after the checked ‘all’ radio button will be visible! If you use this code, your whole directory will be visible again since ‘all’ is checked by default, and when you select another category nothing should be visible.

You probably know what to do from here, but if you just change that CSS to:

#all:checked ~ .all, #werewolves:checked ~ .werewolves, #fairies:checked ~ .fairies, #vampires:checked ~ .vampires {
    display: block;
}

Now, by selecting one of the radio buttons, the right category will toggle! This is perfect for making some effects without javascript.
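Putting it all together, here is a trimmed-down, self-contained sketch of the technique (just two categories, to keep it short):

```html
<!DOCTYPE html>
<html>
<head>
<style>
  /* hide every directory section by default */
  .directory { display: none; }
  /* show the sections whose class matches the checked radio button */
  #all:checked ~ .all,
  #werewolves:checked ~ .werewolves { display: block; }
</style>
</head>
<body>
  <input type="radio" name="supernatural" id="all" checked/>
  <label for="all">All</label>
  <input type="radio" name="supernatural" id="werewolves"/>
  <label for="werewolves">Werewolves</label>

  <h1 class="directory werewolves all">Werewolves</h1>
  <ul class="directory werewolves all">
    <li>Rodney Robinson</li>
  </ul>
</body>
</html>
```

The key point is that the sections must be siblings of the radio buttons, since `~` only matches elements at the same level.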

In closing I’d like to give credit to the book that taught me this trick, “Responsive Web Design by Example” by Thoriq Firdaus. I think it’s a decent book for people like me that are technical but not that good at design, it teaches you to make some sexy-looking websites, and I hope that it’ll help me improve my design skills!

Until next time,

by Errietta Kostala at June 07, 2014 11:25 AM

May 17, 2014

RichiH's blog

git-annex corner case: Reviving a dead remote

Quoth joeyh:

Note that recent versions of git-annex have a reinit command that makes this much simpler:

git clone /existing/repo.annex

git annex info # to refresh your memory about uuid or description of lost repo

git annex reinit uuid|description

Old Post:

Another half a blog post, half a reminder for my future self.

Turns out that pulling an external 3.5" disk off of your nightstand with the help of a tangled USB cable is a surprisingly efficient way to kill it. On the plus side, this yields instant results and complete success.

On the other hand, I kinda lost well over one TiB of personal photographs and other data which I didn't really appreciate a whole lot.

Thankfully, all data was annexed and I always maintain at least three copies of all files at all times, so the fix is as easy as getting two new disks and running

badblocks -swo $manufacturer-$model-$(date '+%F--%H-%M-%S-%Z').badblocks-swo /dev/foo # assuming you keep a git repo of this data

followed by partitioning, mkfs etc and

cd /existing/repo.annex
git annex info # note the UUID you need
cd /new/disk
git clone /existing/repo.annex
cd repo.annex
vim .git/config # add the [annex] block and _copy over the UUID of the lost repo_
git annex get # assuming you want a full copy, which I always do for data archival repos
git annex sync

and done. By running git annex get before git annex sync, I managed to avoid (potentially) saving information about missing data; I simply made sure it was all in place before synching again.

The second external disk allows me to always get one local disk up to speed and another one off-site. I am able to learn from experience ;)

by Richard &#x27;RichiH&#x27; Hartmann at May 17, 2014 07:23 PM

May 15, 2014

RichiH's blog

Interesting debate

It's nice to see that the overall tone of some key debates is slowly changing. The stance of not securing the whole stack piece by piece because some unrelated part in the stack is leaking information as well is steadily losing ground.

Another impact of this whole mess is that you can't really be certain why some parties are arguing against pervasive encryption any more. On the plus side, that makes defaulting to safety simpler.

by Richard &#x27;RichiH&#x27; Hartmann at May 15, 2014 10:55 AM

April 26, 2014

freenode staffblog

April 1st 2014, Followup

It’s been almost too long for this blog post to arrive here after the April Fools quiz this year. Thanks to everyone who participated!

The first ten people who completed the challenges are, in descending order of aprilness:

(times are listed in UTC)

  1. 2014-04-02T18:25:17 booto
  2. 2014-04-02T23:36:53 Fuchs *

  3. 2014-04-03T00:29:29 furry
  4. 2014-04-03T01:34:18 mniip
  5. 2014-04-03T09:41:38 jojo
  6. 2014-04-03T16:29:51 redi
  7. 2014-04-03T18:57:21 BlueShark
  8. 2014-04-04T15:33:24 larinadavid
  9. 2014-04-04T22:27:20 Omniflux
  10. 2014-04-04T23:02:19 apoc
  11. 2014-04-04T23:13:02 thommey

(*) user opted out of any prizes
There were 25 additional nicks who completed the quiz and made it to the winner’s circle but weren’t fast enough to place in the top 10.

The prizes were cloaks for those in the top-10. In addition to the top-10 cloaks, everyone else who finished the challenge and ‘opted in’ was eligible for the cloak lottery. This was a lottery for 3 runner-up cloaks.

Out of the 25 additional people that completed the challenge, the following 3 won a cloak through the cloak lottery:

  • skasturi
  • danielg4
  • jojoa1997

Here are the riddles and their solutions, in the original order:

  • Level 0
    • The clue was given in the April 1st blog post: IyMjI3hrY2Q=
    • That is the string "####xkcd" encoded using base64.
    • The answer: ####xkcd, which was the first channel in the quiz.
  • Level 1
    • Clue: Tnl2cHItbmFxLU9iby1qbnl4LXZhZ2Itbi1vbmU=
    • This is a rot13‘ed and base64’ed string.
    • In Python: "Tnl2cHItbmFxLU9iby1qbnl4LXZhZ2Itbi1vbmU=".decode('base64').decode('rot13')
    • The answer: ####Alice-and-Bob-walk-into-a-bar
  • Level 2
    • Clue: MKWkpKMa
    • This is another string that is encoded with a series of base64 and rot13 transformations.
    • In Python: "MKWkpKMa".decode('rot13').decode('base64').decode('rot13')
    • The answer: ####reddit
  • Level 3
    • Clue: SHg5RkR4SUpIeHFGSnlXVUlJSVFJeHFKCg== | Save this for a later level: | 4 decodes needed
    • Yet another string encoded with a series of base64 and rot13 transformations.
    • In Python: "SHg5RkR4SUpIeHFGSnlXVUlJSVFJeHFKCg==".decode('base64').decode('rot13').decode('base64').decode('rot13')
    • Contestants were expected to do a web search for this and find out it is the end of the Zodiac Killer’s infamous message.
    • The answer: ####zodiac
  • Level 4
    • Clue: | LaTeX right direction | Google! | No maths needed
    • The topic changed several times as contestants seemed pretty stumped on this level, the topic line above was its final form.
    • The answer: ####exner – this was expected from figuring out what the equation is. Simply put, the equation in the image is Exner’s Equation.
  • Level 5
  • Level 6
    • Clue: (verify the file, sha256sum: 0efade1bb29d1b7fdd65e5612159e262cbd41a2e27ed89a0144701a5556da68f)
    • This file is more steganography:
      • Use ‘file‘ to determine what the file type is.
      • Un-7zip the .unknown file
      • Base64 decode the output
      • Use ‘file’ to determine that the output is a .jpg
      • Unzip the .jpg
      • Untar two.tar.gz
      • Open the surprised.txt file.
    • The content of surprised.txt is: ####ImSoMetaEvenThisAcronym
    • The answer: ####ImSoMetaEvenThisAcronym
  • Level 7
    • Clue: AQwPfPN1ZBXNfvNj4bPmVR4fVQYPfPNlZBXNfvNkAP4jZhXNflOS and “Da Vinci” | Jules Verne | s/.02/.03/ in the decrypted text
    • The clue is base64’ed and rot13’ed. To decode it in Python: print "AQwPfPN1ZBXNfvNj4bPmVR4fVQYPfPNlZBXNfvNkAP4jZhXNflOS".decode('rot13').decode('base64')
    • This yields: 48° 50′ 0″ N, 2° 20′ 14.02″ E
    • These are GPS coordinates for the Paris meridian.
    • From this and the “Da Vinci” clue contestants were expected to find the Wikipedia page about the Rose Line.
    • The specific quote that contestants were supposed to find:
      "Dan Brown simply invented the 'Rose Line' linking Rosslyn and Glastonbury. The name 'Roslin' definitely does not derive from any 'hallowed Rose Line'. It has nothing to do with a 'Rose Bloodline' or a 'Rose Line meridian'. There are many medieval spellings of 'Rosslyn'. 'Roslin' is certainly not the 'original spelling': it is now the most common spelling for the village."[18]


    • The “Jules Verne” clue was supposed to reaffirm to contestants that they were on the right track:
      The competition between the Paris and Greenwich meridians is a plot element in Jules Verne's "Twenty Thousand Leagues Under the Sea", published just before the international decision in favor of the British one.


    • The answer: ####roslin
  • Level 8
  • Level 9
    • Clue: ZCLVLLCOIUTKKJSCEKHHHSMKTOOPBA | OGUCSSGAPVGVLUMBTVOGICUNJDHSTB | RUTJJGNXUNTY | Letters that would repeat in a typical word do not repeat in the key(s), example ‘freenode’ would be ‘frenod’ | |
    • Alright this one is really really really tricky. The topic changed several times.
    • The three strings are encoded with Four-square from the previous level with the same keys.
    • Contestants were expected to use ‘UVB’ and ‘RUSSIA’ as keys for the Four-square cipher.
    • It was expected that contestants arrive at ‘UVB’ from the channel name, ####POVAROVOSOLNECHNOGORSKRUSSIA
    • The former transmitter[27] was located near Povarovo, Russia[28] at 56°5′0″N 37°6′37″E which is about halfway between Zelenograd and Solnechnogorsk and 40 kilometres (25 mi) northwest of Moscow, near the village of Lozhki.


    • The link points to a file that has the “No Q” image from a previous level hidden in it.
    • The “RUTJJGNXUNTY” decrypts to AaronHSwartz
    • The answer: ####AaronHSwartz
  • Level 10
    • Originally this channel (####AaronHSwartz) was supposed to be the winner’s circle; however, due to too many people leaking answers and channel names, one more challenge was added.
    • Same cipher as before, this time the keys were ‘DEMAND’ and ‘PROGRESS’
    • Demand Progress is an Internet activist-related organization specializing in petitions to help gain traction for legal movements against Internet censorship and related subjects, started by Aaron Swartz.
    • RMS is Richard Matthew Stallman, and ‘Join Us Now and Share the Software’ is an openly licensed song by Richard Stallman.
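
The decode snippets above are Python 2 (`str.decode('base64')` and `.decode('rot13')` no longer exist on strings in Python 3). The same pipeline can be written in Python 3, using the Level 2 clue as an example:

```python
import base64
import codecs

clue = "MKWkpKMa"
step = codecs.decode(clue, "rot13")     # undo the outer rot13
step = base64.b64decode(step).decode()  # then the base64 layer
answer = codecs.decode(step, "rot13")   # and the inner rot13
print(answer)  # reddit
```

The other levels work the same way, just with the decode steps applied in the order each clue requires.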

The topic in ####JOINUSNOWANDSHARETHESOFTWAREWRITTENBYRMS was: Congratulations on solving the freenode’s April Fools 2014 Crypto Challenge | Want MOAR? #ircpuzzles

Congratulations to those who participated this year!

The 25 additional people that completed the challenge:

  • 2014-04-05T04:06:53 knivey
  • 2014-04-05T10:00:12 Tordek
  • 2014-04-05T15:40:50 jacob1 *
  • 2014-04-05T15:48:48 stac
  • 2014-04-05T16:24:01 Changaco *
  • 2014-04-05T17:30:01 Arch-TK *
  • 2014-04-05T17:35:05 ar *
  • 2014-04-05T18:16:20 Weetos *
  • 2014-04-05T18:38:39 nyuszika7h
  • 2014-04-05T18:56:26 vi[NLR]
  • 2014-04-05T19:06:38 tkd *
  • 2014-04-05T21:54:56 Chiyo
  • 2014-04-05T22:46:01 slidercrank
  • 2014-04-05T22:54:10 jojoa1997
  • 2014-04-06T00:55:51 Pixelz *
  • 2014-04-06T02:53:25 Transfusion
  • 2014-04-06T02:58:15 DonkeyHotei
  • 2014-04-06T03:04:01 sdamashek *
  • 2014-04-06T03:07:49 Cypi *
  • 2014-04-06T03:36:03 FXOR
  • 2014-04-06T13:44:35 pad
  • 2014-04-06T19:22:06 skasturi
  • 2014-04-06T19:37:13 Bloodhound
  • 2014-04-07T08:16:22 molly *
  • 2014-04-07T14:42:32 Bijan-E

(*) user opted out of the cloak lottery

by yano at April 26, 2014 06:05 PM

April 18, 2014

RichiH's blog

higher security

Instant classic


NO, there were errors:
The certificate does not apply to the given host
The certificate authority's certificate is invalid
The root certificate authority's certificate is not trusted for this purpose
The certificate cannot be verified for internal reasons

Signature Algorithm: md5WithRSAEncryption
    Issuer: C=XY, ST=Snake Desert, L=Snake Town, O=Snake Oil, Ltd, OU=Certificate Authority, CN=Snake Oil CA/emailAddress=[email protected]
        Not Before: Oct 21 18:21:51 1999 GMT
        Not After : Oct 20 18:21:51 2001 GMT
    Subject: C=XY, ST=Snake Desert, L=Snake Town, O=Snake Oil, Ltd, OU=Webserver Team, CN=www.snakeoil.dom/emailAddress=[email protected]
            X509v3 Subject Alternative Name: 
            email:[email protected]

For your own pleasure:

openssl s_client -connect -showcerts

or just run

echo '
' | openssl x509 -noout -text

At least they're secure against heartbleed.

by Richard &#x27;RichiH&#x27; Hartmann at April 18, 2014 10:22 AM

April 17, 2014

erry's blog

Fallback for HTML5 date input

HTML5 is awesome. It gives us so many things that we previously had to do manually! Unfortunately, not all browsers support it yet.
Personally, I’m eager to use all the new features, but I don’t want to sacrifice browser support. For example, I don’t want to use jquery ui to render a date picker if a browser supports html5 <input type="date">.
Fortunately, you can check if a browser supports the HTML5 way, and use Jquery UI even if it doesn’t! I used Modernizr for this, and it’s quite awesome. As you can see, there are many many things it can detect, but for this particular instance, you can build a bundle with only “input types” (I might do a more extensive post on Modernizr later… hmmm).

First of all, build a Modernizr bundle with the options you want (or just “input types” like me) and download the resulting .js file. Assuming you already have jquery ui installed, your code should look like this:

        <!-- The real path to the jquery ui css... !-->
        <link href="css/jquery-ui.css" rel="stylesheet" type="text/css" />
        <input type="text" class="date" />
        <!-- Path to jquery & jquery ui !-->
        <script src="js/jquery.js"></script>
        <script src="js/jquery-ui.js"></script>
        <script>
            $( ".date" ).datepicker();
        </script>

Which will work fine, but I still prefer the native datepicker where available. This is where Modernizr comes in handy! First of all, load the Modernizr script as well.

        <script src="js/modernizr.js"></script>

You can now use the date input type like you usually would:

        <input type="date" />

As long as you also add this somewhere in your javascript code:

if (!Modernizr.inputtypes.date) {
    $( "input[type=date]" ).datepicker();
}

It’s that easy! Now if the date input type isn’t supported, all your <input type="date"> fields will automatically use the jquery ui datepicker. Additionally, you will automatically get the native solution when these browsers start supporting it, without having to change anything else in the future.

(altogether, the code should look like this):

        <!-- The real path to the jquery ui css... !-->
        <link href="css/jquery-ui.css" rel="stylesheet" type="text/css" />
        <input type="date" />
        <!-- Path to jquery & jquery ui !-->
        <script src="js/jquery.js"></script>
        <script src="js/jquery-ui.js"></script>
        <script src="js/modernizr.js"></script>
        <script>
            if (!Modernizr.inputtypes.date) {
                $( "input[type=date]" ).datepicker();
            }
        </script>

Native datepicker in chrome:

native date input

Jquery ui datepicker in firefox:

jquery ui date picker

That’s all for now! (Should I blog about modernizr more…? Let me know on twitter (@errietta) :p)

‘Till next time!

by Errietta Kostala at April 17, 2014 05:08 PM

mrmist's blog

Telephoney Rant

Two months after submitting our “home move” order and almost two months since moving in to our new home, we have no phone line from BT. We’re incredibly lucky that the area is serviced by Virgin cable, so we have managed to obtain alternative Internets, otherwise I dare say I would be apocalyptic with rage. As it is, I’m just on “simmer” instead. I was actually moved to write a real letter to the company yesterday, after what was probably my fifth or sixth “update” – updates, that is, that don’t really update anything other than the next time that we’ll be called with an update. Pathetic.

by Mrmist at April 17, 2014 08:38 AM

April 16, 2014

RichiH's blog

secure password storage

Dear lazyweb,

for obvious reasons I am in the process of cycling out a lot of passwords.

For the last decade or so, I have been using openssl.vim to store less-frequently-used passwords and it's still working fine. Yet, it requires some manual work, not least of which is manually adding random garbage at the start of the plain text (and in other places) every time I save my passwords. In the context of changing a lot of passwords at once, this has started to become tedious. Plus, I am not sure if a tool of the complexity and feature-set of Vim is the best choice for security-critical work on encrypted files.

Long story short, I am looking for alternatives. I did some research but couldn't come up with anything I truly liked; as there's bound to be tools which fit the requirements of like-minded people, I decided to ask around a bit.

My personal short-list of requirements is:

  • Strong crypto
  • CLI-based
  • Must add random padding at the front of the plain text and ideally in other places as well
  • Should ideally pad the stored file to a few kB so size-based attacks are foiled
  • Must not allow itself to be swapped out, etc
  • Must not be hosted, cloud-based, as-a-service, or otherwise compromised-by-default
  • Should offer a way to search in the decrypted plain text, nano- or vi-level of comfort are fine
  • Both key-value storage or just a large free-form text area would be fine with a slight preference for free-form text

Any and all feedback appreciated. Depending on the level of feedback, I may summarize my own findings and suggestions into a follow-up post.

by Richard &#x27;RichiH&#x27; Hartmann at April 16, 2014 06:47 AM

April 15, 2014

freenode staffblog


The recently exposed heartbleed bug in the OpenSSL library has surprised everyone with a catastrophic vulnerability in many of the world’s secure systems.

In common with many other SSL-exposed services, some freenode servers were running vulnerable versions of OpenSSL, exposing us to this exploit. Consequently, all of our affected services have been patched to mitigate the vulnerability, and we have also regenerated our private SSL keys and certificates.

In an unrelated event, due to service disruption & the misconfiguration of a single server on our network, an unauthorised user was allowed to use the ‘NickServ’ nickname for a short period Sunday morning. Unfortunately there is a possibility that your client sent data (including your freenode services password) to this unauthorised client. Identification via SASL, certfp or server password were not affected, but any password sent directly to the “NickServ” user might have been.

Because of these two recent issues, we would like to make the following recommendations to all of our users. It would also be good practice to follow them at regular intervals.

  • Though we are not aware of any evidence that we have been targeted, or our private key compromised, this is inevitably a possibility. SSL sessions established prior to 2014/04/12 may be vulnerable. If your current connection was established prior to this date via ssl then you should consider reconnecting to the network.
  • We would advise that users reset their password (after reconnecting) using instructions returned by the following command:

/msg nickserv help set password

This should help ensure that if your password was compromised through an exploitation of the Heartbleed vulnerability, the damage is limited.

  • In line with general best practice, we would always recommend using separate passwords on separate systems – if you shared your freenode services password with other systems, you should change your password on all of these systems; preferably into individual ones.
  • If you use CertFP, you should regenerate your client certificate and ensure that you update NickServ with the new certificate hash. You can find out how to do this using the following command:

/msg nickserv help cert

  • Having changed passwords and/or certificate hashes, it cannot hurt to verify your other authentication methods (such as email, ACCESS or CERT). It is possible you have additional access methods configured either from past use or (less likely) due to an account compromise.
  • Finally, it is worth noting that although probably the least likely attack vector, Heartbleed can also be used as client-side attack, i.e. if you are still running a vulnerable client a server could attack you. This could be a viable attack if, for instance, you connect to a malicious IRC server and freenode at the same time; hypothetically the malicious IRC server could then attack your client and steal your IRC password or other data. If affected, you should ensure your OpenSSL install is updated and not vulnerable then restart your client.

As ever, staff are available in #freenode to respond to any questions or concerns.

by Pricey at April 15, 2014 07:35 PM

April 13, 2014

erry's blog

Using jquery to make all forms ajax-powered


I recently wrote a piece of hacky javascript to automatically make all my forms powered by AJAX + JSON. I still had to write functions to handle this JSON data, but it saved me time over retrieving values from the form and performing an AJAX request for every form. Plus, when I didn’t need JSON and just used the DOM to change the innerHTML of something, I saved even more time.

This is my (commented!) javascript code (note that it requires jquery):

function getSubmits(form) {
    return $(form).children("input[type=submit],button[type=submit]");
}

//default json handler: just return a div with the result as its innerHTML
function default_json_handler(json) {
    var div = $('<div/>');
    div.html (json);
    return div;
}

function ajaxify() {
    var forms = $('form'); //ALL the forms!
    var sidx  = 0;

    forms.each (function(idx, form) {
        //We'll hold the handler function in the form's data-json-handler
        //attribute. window['function_name'] will tell us if the function
        //exists; otherwise we fall back to the default handler.
        var jsonh   = window[$(form).attr('data-json-handler')]
                        || default_json_handler;
        var submits = getSubmits(form);

        var method  = $(form).attr('method');
        var url     = $(form).attr('action');

        submits.click(
            function(e) {
                //the data that we'll submit through ajax
                var params     = [];

                //This may be useful to let your script
                //know that the data is being posted through
                //AJAX; for example I use this to return JSON
                //instead of XML.
                params['ajax'] = [1];

                if(!$(this).attr('data-sidx')) {
                    $(this).attr('data-sidx', sidx++);
                }

                //since we clicked a submit button, we'll be submitting
                //its value if it has a name...
                var name;
                if (name = $(this).attr('name')) {
                    params[name] = [encodeURIComponent($(this).val())];
                }

                //get the rest of the input values
                params  = getInputValues($(this), params);
                //get the request string
                params  = getRequestStr(params);

                //We use this ID so that we have a specific result div for
                //each of our submit buttons
                var div = $('#result' + $(this).attr('data-sidx'));

                if (!div[0]) {
                    div = $('<div/>', {id: 'result' + $(this).attr('data-sidx')});
                    //attach the result div after the form
                    $(form).after(div);
                }

                $.ajax(url, {
                  dataType  :'json',
                  method    :method,
                  data      :params,
                  success   :function(data) {
                    //call the json handler, append the result, and
                    //scroll to it
                    div.empty().append(jsonh(data)).show(0, function() {
                        $('html, body').animate({
                            scrollTop: $(div).offset().top
                        }, 1000);
                    });
                  },
                  beforeSend :function() {
                        $("<img src='static/img/loading.gif' />")
                            .appendTo(div);
                  }
                });

                return false;
            }
        );
    });
}

//gets a form's input values
function getInputValues(btn, params) {
    //get all the input values, but only checked
    //radios and checkboxes!
    var inputs = btn.parent().find(
        //all inputs that match the criteria
        'input[type!=submit]' +  // We've already handled this
            '[type!=radio]' +    // we handle this below
            '[type!=checkbox]' + // likewise
            '[type!=button],' +  // their value isn't submitted...
                                 // unless they're submit buttons which
                                 // we've handled.
        ':checked,' +            // elements with the :checked state.
        'select,' +              // selects
        'textarea'               // textareas
    );

    inputs.each(function(idx, input) {
        var name  = $(input).attr('name');
        var value = $(input).val();

        if (name) {
            if (!params[name]) {
                params[name] = []; // We use an array here so that we can
                                   // handle multiple elements with the
                                   // same name (checkboxes!)
            }
            params[name].push(encodeURIComponent(value));
        }
    });

    return params;
}

// gets the request string from an array of parameters.
function getRequestStr(params) {
    var paramstr = '';

    for (var key in params) {
        for (var i in params[key]) {
            paramstr += (paramstr ? '&' : '') + key + '=' + params[key][i];
        }
    }

    return paramstr;
}

//calls the ajaxify function on document load
$(document).ready(ajaxify);

This is what our form looks like:

<!-- Pay attention to data-json-handler="result"! It's what we'll use
to handle the json result from ajax! -->
<form method="post" action="script.php" data-json-handler="result">
    Text input:<br/>
    <input type="text" name="text" />

    Password input:<br/>
    <input type="password" name="password" />

    <textarea name="textarea"></textarea>

    Email input:<br/>
    <input type="email" name="email" />

    Date input:<br/>
    <input type="date" name="date" />

   <input type="radio" name="radio" value="1" /> Radio option 1

   <input type="radio" name="radio" value="2" /> Radio option 2

   <input type="radio" name="radio" value="3" /> Radio option 3


   <input type="checkbox" name="check[]" value="1" /> Checkbox option 1

   <input type="checkbox" name="check[]" value="2" /> Checkbox option 2

   <input type="checkbox" name="check[]" value="3" /> Checkbox option 3

<select name="select">
    <option value="">Please select...</option>
    <option value="Cow">Moo!</option>
    <option value="Pig">Oing oing!</option>
    <option value="Cat">Meow!</option>
    <option value="Dog">Woof!</option>
</select>

<input type="submit" name="submit" value="Submit with name!" />
<button type="submit" name="submit2" value="submit 2!">Submit &lt;button&gt; with name!</button>
<input type="submit" value="Submit without name!" />
</form>


<script src="jquery.js"></script>
<script src="ajaxify.js"></script>
<script src="jsonhandlers.js"></script>

And our jsonhandlers.js script will have the functions that actually handle the json result. For example:

function result(data) {
    var table = $("<table><tr><th>Name</th><th>Email</th></tr></table>");
    for (var i = 0; i < data.length; i++) {
        table.append($("<tr><td>" + data[i].name + "</td><td>"
                       + data[i].email + "</td></tr>"));
    }
    return table;
}

Of course, script.php will have to return appropriate data. For this example, I just used:

echo json_encode(
    array (
        array (
            'name' => 'User 1',
            'email' => '[email protected]'
        ),
        array (
            'name' => 'User 2',
            'email' => '[email protected]'
        )
    )
);

But in the real world you’d use the input to get data.

You can now see how the form works. You’ll notice that when you submit it you get your results in a table, and if you inspect the HTTP request you should see all the data that is sent.


You can verify that all the input values are sent to the PHP page.

tl;dr: Our javascript code goes through every form and adds a handler to its submit buttons. When one is clicked, it uses jquery to find the relevant input elements and collect their values, then constructs a query string and performs an AJAX request, including ajax=1 so that our server-side script knows the form was submitted via AJAX and can act accordingly. Once the result arrives, the function named in the form’s data-json-handler attribute is called, and how you handle the JSON is up to you.

If you want to change the JSON result behaviour just change the data-json-handler of each form!

That’s all for now. Hopefully this was helpful and not too confusing!

by Errietta Kostala at April 13, 2014 05:14 PM

April 02, 2014

freenode staffblog


UPDATE: This was of course an April Fool… you can “/msg nickserv set property GOOGLE+” to remove the property from your account. There might still be other secrets within the message though…


Edit: Previous versions of the post contained an incorrect NickServ command. We have corrected this and apologise for the inconvenience.

by Pricey at April 02, 2014 08:15 AM

April 01, 2014

Md's blog

Real out of band connectivity with network namespaces

This post explains how to configure a second, totally independent network interface with its own connectivity on a Linux server. It can be useful to access the server when the regular connectivity is broken.

This is possible thanks to network namespaces, a virtualization feature available in recent kernels.

We need to create a simple script to be run at boot time which will create and configure the namespace. First, move in the new namespace the network interface which will be dedicated to it:

ip netns add oob
ip link set eth2 netns oob

And then configure it as usual with iproute, by executing it in the new namespace with ip netns exec:

ip netns exec oob ip link set lo up
ip netns exec oob ip link set eth2 up
ip netns exec oob ip addr add dev eth2
ip netns exec oob ip route add default via

The interface must be configured manually because ifupdown does not support namespaces yet, and it would use the same /run/network/ifstate file which tracks the interfaces of the main namespace (this is also a good argument in favour of something persistent like Network Manager...).
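For reference, the pieces above can be collected into a single boot-time script. This is only a sketch: the address, gateway and pid file name are placeholders you must fill in yourself, and it has to run as root.

#!/bin/sh -e
# Sketch of the boot-time script described above; OOB_ADDR and OOB_GW
# are placeholders for your own out of band address and gateway.
OOB_ADDR="<address>/<prefix>"
OOB_GW="<gateway>"

ip netns add oob
ip link set eth2 netns oob

ip netns exec oob ip link set lo up
ip netns exec oob ip link set eth2 up
ip netns exec oob ip addr add "$OOB_ADDR" dev eth2
ip netns exec oob ip route add default via "$OOB_GW"

# firewall and a dedicated sshd (the pid file name is arbitrary)
ip netns exec oob iptables-restore < /etc/network/firewall-oob-v4
ip netns exec oob /usr/sbin/sshd -o PidFile=/run/sshd-oob.pid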

Now we can start any daemon in the namespace, just make sure that they will not interfere with the on-disk state of other instances:

ip netns exec oob /usr/sbin/sshd -o PidFile=/run/

Netfilter is virtualized as well, so we can load a firewall configuration which will be applied only to the new namespace:

ip netns exec oob iptables-restore < /etc/network/firewall-oob-v4

As documented in ip-netns(8), iproute netns add will also create a mount namespace and bind mount in it the files in /etc/netns/$NAMESPACE/: this is very useful since some details of the configuration, like the name server IP, will be different in the new namespace:

mkdir -p /etc/netns/oob/
echo 'nameserver' > /etc/netns/oob/resolv.conf

If we connect to the second SSH daemon, it will create a shell in the second namespace. To enter the main one, i.e. the one used by PID 1, we can use a simple script like:

#!/bin/sh -e
exec nsenter --net --mount --target 1 "$@"

To reach the out of band namespace from the main one we can use instead:

#!/bin/sh -e
exec nsenter --net --mount --target $(cat /var/run/ "$@"

Scripts like these can also be used in fun ssh configurations like:

Host 10.2.1.*
 ProxyCommand ssh -q -a -x -N -T 'nsenter-main nc %h %p'

April 01, 2014 04:26 PM

February 20, 2014

Md's blog

Automatically unlocking xscreensaver in some locations

When I am at home I do not want to be bothered by the screensaver locking my laptop. To solve this, I use a custom PAM configuration which checks if I am authenticated to the local access point.

Add this to the top of /etc/pam.d/xscreensaver:

auth sufficient pam_exec.so quiet /usr/local/sbin/pam_auth_xscreensaver

And then use a script like this one to decide when you want the display to be automatically unlocked:

#!/bin/sh -e

# return the ESSID of this interface
current_essid() {
  /sbin/iwconfig $1 | sed -nre '/ESSID/s/.*ESSID:"([^"]+)".*/\1/p'
}

# automatically unlock only for these users
case "$PAM_USER" in
  "")   echo "This program must be run by!"
        exit 1 ;;
  md)   ;;
  *)    exit 1 ;;
esac

CURRENT_ESSID=$(current_essid wlan0)

# automatically unlock when connected to these networks
case "$CURRENT_ESSID" in
  MYOWNESSID) exit 0 ;;
esac

exit 6

February 20, 2014 05:49 AM

February 04, 2014

freenode staffblog


As many of you will be aware, freenode has been experiencing intermittent instability today, as the network has been under attack. Whilst we have network services back online, the network continues to be a little unreliable and users are continuing to report issues in connecting to the network.

We appreciate the patience of our many wonderful users whilst we continue to work to mitigate the effects this has on the network.

We also greatly appreciate our many sponsors who work with us to help minimise the impact and who are themselves affected by attacks against the network.

We’ve posted on this subject before, and what we said then remains as true as ever – and for those of you who didn’t read the earlier blogpost first time round, it’s definitely worth perusing it now if this subject interests or affects you.

Thank you all for your patience as we continue to work to restore normal service!

[UPDATE 04/02/2014]

At the moment SASL authentication works only with the PLAIN mechanism, *not* DH-BLOWFISH. We’ve checked and Tor should be working too. Sadly will be taken off the rotation, so those users who’ve connected specifically to it, please make sure that your client points to our recommended roundrobin of!

by njan at February 04, 2014 04:53 PM

January 30, 2014

Md's blog

On people totally opposed to systemd

Do you remember the very vocal people who, a decade ago, would endlessly argue that udev was broken and that they would never use it?

Percentage over time of systems on which udev is installed

Sometimes you can either embrace change or be dragged along by it. We are beyond the inflection point, and the systemd haters should choose their place.

January 30, 2014 05:15 AM

December 01, 2013

Md's blog

Easily installing Debian on a Cubieboard

I recently bought a Cubieboard to replace my old Sheevaplug, which has finally blown a power supply capacitor (this appears to be a common defect of Sheevaplugs). I am publishing these instructions to show how to install Debian on sunxi systems (i.e. those based on the Allwinner A10 SoC or one of its newer versions) with no need for cross compilers, emulators or ugly FAT partitions.

This should work on any sunxi system as long as U-Boot is at least version 2012.10.

The first step is to erase the beginning of the SD card to remove anything in the unpartitioned space which may confuse U-Boot, then partition and format it as desired. The first partition must begin at 1 MB (1024*1024/512 = 2048 sectors) because the leading unpartitioned space is used by the boot loaders.

dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=1
parted /dev/mmcblk0

  mklabel msdos
  mkpart primary ext4 2048s 15G
  unit s
  mkpart primary linux-swap ... -1

mkfs.ext4 -L root /dev/mmcblk0p1
mkswap --label swap /dev/mmcblk0p2

Download the boot loaders and an initial kernel and install them:

tar xf cubieboard_hwpack.tar.xz
dd if=bootloader/sunxi-spl.bin of=/dev/mmcblk0 bs=1024 seek=8
dd if=bootloader/u-boot.bin of=/dev/mmcblk0 bs=1024 seek=32

mount /dev/mmcblk0p1 /mnt
mkdir /mnt/boot/

cp kernel/script.bin kernel/uImage /mnt/boot/

script.bin is Allwinner's proprietary equivalent of the device tree: it will be needed until sunxi support is fully merged into mainline kernels.

U-Boot needs to be configured to load the kernel from the ext4 file system (join the lines at \\, this is not a supported syntax!):

cat << 'END' > /mnt/boot/uEnv.txt
kernel=uImage
root=/dev/mmcblk0p1 rootwait
boot_mmc=ext4load mmc 0:1 0x43000000 boot/script.bin && ext4load mmc 0:1 0x48000000 boot/${kernel} \\
  && watchdog 0 && bootm 0x48000000
END

Now the system is bootable: add your own root file system or build one with debootstrap. My old Sheevaplug tutorial shows how to do this without a working ARM system or emulator (beware: the other parts are quite obsolete and should not be trusted blindly).

If you have an old armel install around it will work as well, and you can easily cross-grade it to armhf as long as it is up to date to at least wheezy (the newer, the better).

You can also just use busybox for a quick test:

mkdir /mnt/bin/
dpkg-deb -x .../busybox-static_1.21.0-1_armhf.deb .
cp bin/busybox /mnt/bin/
ln -s busybox /mnt/bin/sh

After booting the busybox root file system you can run busybox --install /bin/ to install links for all the supported commands.

Until Debian kernels support sunxi (do not hold your breath: there are still many parts which are not yet in mainline) I recommend installing one of Roman's kernels:

dpkg -i linux-image-3.4.67-r0-s-rm2+_3.4.67-r0-s-rm2+-10.00.Custom_armhf.deb
mkimage -A arm -O linux -T kernel -C none -a 40008000 -e 40008000 \
  -n uImage -d /boot/vmlinuz-3.4.67-r0-s-rm2+ /boot/uImage-3.4.67-r0-s-rm2+

An initramfs is not needed with these kernels for most setups, but one can be created with:

update-initramfs -c -k 3.4.67-r0-s-rm2+
mkimage -A arm -T ramdisk -C none -n uInitrd \
  -d /boot/initrd.img-3.4.67-r0-s-rm2+ /boot/uInitrd-3.4.67-r0-s-rm2+

/boot/uEnv.txt will have to be updated to load the initramfs.

Since the Cubieboard lacks a factory-burned MAC address you should either configure one in script.bin or (much easier) add it to /etc/network/interfaces:

iface eth0 inet dhcp
        hwaddress ether xx:xx:xx:xx:xx:xx
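If you do not want to pick the address entirely by hand, a locally administered MAC can be derived from any stable per-board string. This is only a sketch: the seed below is an arbitrary example, not a real SoC serial.

# derive a stable MAC address by hashing a fixed per-board string
seed="cubieboard-0"   # example seed; use something unique per board
mac=$(printf '%s' "$seed" | md5sum \
      | sed -r 's/^(..)(..)(..)(..)(..)(..).*/\1:\2:\3:\4:\5:\6/')
# set the locally-administered bit and clear the multicast bit in the
# first octet so the address cannot clash with a vendor-assigned one
o1=${mac%%:*}
o1=$(printf '%02x' $(( (0x$o1 | 0x02) & 0xfe )))
mac="$o1:${mac#*:}"
echo "$mac"

Copy the printed address into the hwaddress line above.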

To learn more about the Allwinner SoCs boot process you can consult 1 and 2.

December 01, 2013 12:40 PM

November 08, 2013

mrmist's blog

Broadband Rant

It’s hard to see how targets of 90%+ coverage are going to be met in this country, when we can’t get fibre broadband even on our new housing estate in a redeveloping area.

The next street along has an upgraded cabinet, and one further down has an upgraded cabinet, but our cabinet remains outdated. I am told that our cabinet does not meet the “financial criteria” due to having too few houses connected. To me, it seems that hardly any similar cabinets would ever make the criteria – in other words, unless you happen to be extremely lucky, or you’re living in a city centre, you can forget it, regardless of promises to cover 90% of the UK.

I guess what that 90% figure means is that 90% of exchanges will be capable of providing fibre broadband – even if only 1/3 of the connected cabinets can.

It seems to me that having financial criteria from a single provider makes the aims of the project incompatible with the implementation.

by Mrmist at November 08, 2013 08:20 AM

November 02, 2013

Md's blog

New PGP key

Since my current PGP key is a 1024-bit DSA key generated in 1998, I decided that it is time to replace it with a stronger one: there are legitimate concerns that breaking 1024-bit DSA is well within the reach of major governments.

I have been holding out for the last year waiting for GnuPG 2.1, which will support elliptic curve cryptography, but I recently concluded that adopting ECC now would not be a good idea: Red Hat still does not fully support it due to unspecified patent concerns, and there is no consensus in the cryptanalysts' community about the continued strength of (some?) ECC algorithms.

So I created three fancy keys: a 4096-bit main key for offline storage, which hopefully will be strong enough for a long time, and two 3072-bit subkeys for everyday use.

I have published a formal key transition statement and I will appreciate if people who have signed my old key will also sign the new one.

What follows are the instructions that I used to generate these PGP keys. They follow the current best practices and only reference modern software.

While the GnuPG defaults are usually appropriate, I think that it is a good idea to use a stronger hash for the key signatures of very long-lived keys. I could not find a simple way to "upgrade" the algorithm of key self signatures.

echo 'cert-digest-algo SHA256' >> ~/.gnupg/gpg.conf

First, generate an RSA/4096 sign-only key, which will be your master key and may be stored offline. Then add to it two RSA/3072 subkeys (one sign-only and one encrypt-only):

# generate a RSA/4096 sign only key
gpg2 --gen-key
# add two RSA/3072 subkeys (sign only and encrypt only)
gpg2 --edit-key 8DC968B0

Since GnuPG lacks a command to remove the master secret key while keeping its secret subkeys, you need to delete the complete secret keys and then re-import only the subkeys:

gpg2 --export-secret-keys 8DC968B0 > backup.secret
gpg2 --export-secret-subkeys 8DC968B0 > backup.subkeys
gpg2 --delete-secret-key 8DC968B0
gpg2 --import backup.subkeys

Then you can import again the complete keys in a different secret keyring, which can be stored offline:

mkdir ~/.gnupg/master/
gpg2 --no-default-keyring \
  --keyring ~/.gnupg/pubring.gpg \
  --secret-keyring ~/.gnupg/master/secring.gpg \
  --import backup.secret

Now you can move ~/.gnupg/master/ to a USB stick. You are supposed to protect the master secret key with a strong passphrase, so there is no point in using block level encryption on the removable media.

Since you are only using the master key to sign other keys, it only needs to be configured as the second keyring in ~/.caffrc:

$CONFIG{'secret-keyring'} = $ENV{HOME} . '/.gnupg/master/secring.gpg';

It is also a good idea to have a hard copy backup of your keys, since the lifetime of USB sticks should not be trusted too much:

paperkey -v --output printable.txt --secret-key backup.secret
a2ps -2 --no-header -o printable.txt
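When the USB stick eventually dies, the printout can be turned back into a usable secret key. A sketch, assuming the printed text has been typed (or OCRed) back into printable.txt:

# reconstruct the secret key by combining the paper backup with the
# public key, then import the result
paperkey --pubring ~/.gnupg/pubring.gpg \
  --secrets printable.txt --output restored.secret
gpg2 --import restored.secret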

November 02, 2013 11:03 PM

August 09, 2013

freenode staffblog

Reminder: Keep your NickServ email up to date.

If you’ve registered with NickServ within the last few years then you’ll have used an email address and we’ll have sent you a mail to verify it. That will probably be the last time you heard from us…

…until you forget your password and find yourself unable to identify to your account. When that happens we can send an email (only to that same address) to verify your identity and reset your password.

You aren’t stuck with the email you originally used though! We’d very strongly recommend you take 5 minutes to double check the set email address is current, especially in light of recent service closures. You don’t need access to your old inbox to change your registered email, just your NickServ password.

To view the current state of your account, while identified type:

/msg nickserv info

If you’d like to then change the registered email address, first…

/msg nickserv set email [email protected]

… then check your email inbox. We’ll have sent you another email with instructions to verify this new address.

Your email address is hidden from other users by default. You can ensure this by setting:

/msg nickserv set hidemail on

Thanks for using freenode!

by Pricey at August 09, 2013 09:18 AM

July 22, 2013

freenode staffblog

Server hosting and trust

For the purpose of disclosure we have had to make the difficult decision to discontinue a long-standing relationship with a server sponsor.

As a freenode user you may be aware that our set-up is somewhat untraditional and differs from that of many other IRC networks; servers are sponsored by various companies and educational institutions across the globe and all our infrastructure is centrally managed by the freenode infrastructure team. Generally speaking we do not provide o:lines or other privileges to server sponsors. Whilst it is possible for a sponsor contact to also volunteer as a staffer on the network such recruitment is independent of any server hosting.

Our staff are expected to work together closely and communication is key in any freenode relationship, be that with users, among staff or with sponsor contacts. It is important to us to be consistent in the way we provide support and apply policy and we expect all volunteers to be intimately familiar with our policies, procedures and philosophies — which in turn means that senior staff invest a lot of time in ensuring that any new recruits are given adequate support when getting to know the ins and outs of the network and what being a freenode volunteer entails.

Unfortunately, one of our server sponsors added an o:line for themselves on the server they sponsored. Whilst we do not believe that this was done with any malicious intent, but rather through thoughtlessness/negligence and having forgotten the expectations set out on our “Hosting a Server” page, we feel that we are unable to comfortably and confidently continue the relationship.

Our number one priority has to be our target communities, the Free and Open Source Software communities that have chosen to make use of freenode in their internet activities.

Whilst we do not believe and have no evidence to indicate that any user traffic or data has been compromised, we would of course encourage you to change your passwords if you feel that this would make you more comfortable in continuing to use our services.

We can only apologise for this happening and we’d like to assure you that trust is incredibly important to us and that we are incredibly embarrassed that this situation arose in the first place.

As a result of this we have just replaced our SSL certificates, so if you notice that these have changed then this is the reason why.

We will of course take this opportunity to remind all our sponsors of our expectations when it comes to providing services to freenode and our target communities.

Again, we apologise for any inconvenience and we hope that any loss of trust in the network that may have resulted from this incident can be restored, and that your projects will continue to feel comfortable using the network in future.



by christel at July 22, 2013 07:19 PM

mrmist's blog

Filter in the name of protection

I think it’s shocking that one of the central pillars of the concept of the Internet, free access to all things, is casually eroded by David Cameron in the name of “protecting the children”. This is appalling. Whilst I’m sure that this will give some poor quality parents an illusion of online safety, saving them from what must surely be a terrible chore of actually having to care about what their children are doing for themselves, filtering traffic by default is a massive blow to online freedoms. This will not make things better. This paves the way for the government to more fully dictate how and what we should view on the Internet in the future – after all, if the technical filters are already in place, why not just increase them a nudge to filter out more content that the government deems “unsuitable”? And, of course, the elephant in the room is that those people who do not have these filters activated, who choose instead to maintain real access to the Internet, will have suspicion cast upon them.

by Mrmist at July 22, 2013 07:27 AM

July 17, 2013

freenode staffblog

Fosscon, an open source conference in Philadelphia PA, Saturday August 10th

FOSSCON 2013 will be held on August 10th, 2013.  Several of our very own staff here at freenode will be attending this year and we are really looking forward to it.

FOSSCON was spawned from the depths of freenode and this will be the 4th event so far.

We are very excited about this year’s keynote speaker, Philadelphia’s own Jordan Miller, who leads a research team at the University of Pennsylvania. Jordan makes heavy use of open source software and is doing amazing work with 3D printing as it pertains to transplant organs.

Listed below is a just a quick peek at some of our confirmed speakers and their topics:

  • Bhavani Shankar will be speaking on how to bring in new developers to open source projects.
  • Elizabeth Krumbach Joseph will be speaking on Open Source Systems Administration.
  • Corey Quinn will be speaking on configuration management with Salt.
  • Brent Saner will be speaking on Project.Phree, a wireless mesh project.
  • Dru Lavigne will be speaking on FreeNAS 9.1.
  • Jérôme Jacovella-St-Louis will be hosting a workshop on cross-platform development with the Ecere SDK.
  • John Ashmead will be speaking on the math and science of invisibility.
  • John Stumpo will be offering a workshop on the Challenges facing FOSS game projects.
  • Walt Mankowski will be speaking on Scientific Programming with NumPy and SciPy.
  • Chris Nehren will be speaking on bridging the gap between development and operations.
  • Christina Simmons will be speaking on starting and managing open source events/projects.
  • Hector Castro will be offering a hands-on workshop on the Riak database engine.
  • Dan Langille will be hosting a workshop on Bacula: The Networked Backup Open Source Solution

If you haven’t registered yet, please do so here:!  We’ve had such an awesome response so far and are so excited to see how far we can go this year! Invite your friends, your partners, your business associates, and everyone else you know!  We’ll see you soon!

by JonathanD at July 17, 2013 09:51 PM

June 07, 2013

Md's blog

Torre Telecom Italia, Rozzano

Today I was lucky enough to be able to visit the Telecom Italia telecommunications tower located in Rozzano, just south of Milano, and took some photos.

This tower, with its 187 meters, is one of the tallest man-made structures in Italy.

It was built by Telecom Italia in 1990 to create high capacity radio links to Genova and Torino and nowadays it contains radio transmitters for a TV station and many kinds of radio networks.

It is an impressive monument to an age when telcos had no optical fibers, but plenty of money.

June 07, 2013 02:09 AM

January 11, 2013

mquin's blog

Notes and iCloud

One of the tools included as part of iOS 6 and OS X 10.8 is a simple but useful note-taking app, unsurprisingly named 'Notes', which is also part of the iCloud service and can be synced between devices over the internet.

I've used Notes lightly since it appeared, primarily to jot things down when I'm travelling (I used it a lot at Worldcon), and as a shared clipboard to move URLs and small snippets of text between my Mac and the iPad.

What isn't readily obvious when using the application in either guise, or the iCloud web service, is how the notes are stored.

It becomes apparent, however, if you use a third-party mail client to access your iCloud email account. There is a 'Notes' folder, hidden when viewed in, which contains, as you might expect, your notes in standard e-mail message form.

Okay, so we can read notes over IMAP, but can we write them? Editing existing ones works as expected, but just saving a new email message into the folder doesn't - the message is visible to the IMAP client, but not to the Notes app.

So let's look a little closer at the headers on the notes from Notes:

Subject: An uninteresting note
From: Me <[email protected]>
X-Universally-Unique-Identifier: …
X-Uniform-Type-Identifier:
Content-Type: text/html;
Message-Id: <[email protected]>
Date: Wed, 25 Jul 2012 23:43:09 +0100
X-Mail-Created-Date: Wed, 25 Jul 2012 23:43:09 +0100
Content-Transfer-Encoding: quoted-printable
Mime-Version: 1.0 (1.0)

Mostly what I would expect, apart from the X-Universally-Unique-Identifier and X-Uniform-Type-Identifier headers, which turn out to be the magic trick. Create a new message with those headers (with a new UUID in the unique-identifier one), and hey presto, it appears on iCloud and in Notes.

With a little bit of help from offlineimap and some shell glue it is not particularly hard to use this mechanism to create new notes, or edit existing ones, from the command line.
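The whole trick can be scripted. Here is a minimal sketch that builds a Notes-compatible message: the From address is a placeholder, and the X-Uniform-Type-Identifier value is, as far as I can tell, the one Notes itself writes; a fresh UUID is needed for every new note.

# build a Notes-compatible message in note.eml
uuid=$(uuidgen 2>/dev/null \
  || cat /proc/sys/kernel/random/uuid 2>/dev/null \
  || echo 00000000-0000-4000-8000-000000000000)
cat > note.eml <<EOF
Subject: A note created over IMAP
From: Me <[email protected]>
X-Universally-Unique-Identifier: $uuid
X-Uniform-Type-Identifier:
Content-Type: text/html;
Mime-Version: 1.0 (1.0)

<div>Hello from the command line</div>
EOF

note.eml can then be appended into the hidden Notes folder with offlineimap or any IMAP client that can upload messages.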

January 11, 2013 06:17 PM

October 31, 2012

mquin's other blog


With the increasing amount of tumbleweed around here I've decided to draw a line under my journal. My infrequent ramblings will likely go to or the usual social media places in future.

October 31, 2012 04:33 PM

mquin's blog

A small wireless sensor network

Back in the dim and distant I wrote about collecting electricity consumption data. In the intervening time and due to some hardware changes and failures I had stopped doing this.

Recently, Graeme Thomson gave a talk at ScotLUG about the system he is using to monitor temperatures around his house, using a 1-wire sensor network. Inspired by this, I decided to revisit my monitoring efforts.

Graeme's system took advantage of the fact that his house had recently been rewired and at that time he had laid in a number of twisted pair drops into each room, which could easily be patched onto his 1-wire bus.

Not wanting to run more cable around my own flat I decided to look at the possibility of doing the same thing wirelessly, and I remembered, from my Arduino tinkering, about the JeeNode project.

JeeNodes are compact, low-power Arduino-compatible AVR microcontroller boards with onboard wireless modules. They are very versatile and ideal as a basis for all sorts of wireless sensor nodes. They are also relatively inexpensive, particularly when purchased in kit form.

After a bit of tinkering around I settled on a sensor node design consisting of a JeeNode with an AA battery based power supply, and a DS18B20 digital temperature sensor.

Assembled sensor node

I now have four of these scattered around the flat, running a simple sketch that takes a sample from the temperature sensor every minute or so and transmits it back to my central server. The 868 MHz radio system seems to comfortably cover the entire building.

Using one JeeNode per sensor is not the cheapest way of doing this, but saves me pulling cable and leaves me with a lot of flexibility should I wish to expand this system or monitor additional parameters in future.

The end result: pretty graphs, and a better understanding of how the temperature in my flat changes over time.

24 hour temperature graph

Ideas that I have for the future include replacing the node near my server with an Arduino Nano, rather than using wireless to span half a metre, and reusing that JeeNode in another room.

October 31, 2012 04:14 PM

Moving On

With the current goings on over at LiveJournal, and the realization that I'm not really paying much attention to it any more I've decided to move my meagre creative output over to ikiwiki here. My old LJ posts will remain where they are, but I won't be updating it further.

October 31, 2012 02:40 PM

June 28, 2012

mrmist's blog

Think of the children!

I’m highly amused that the government are selling the latest bundle of internet censorship as a child protection measure. Apparently all our Internets should be filtered by default because that makes it safer for children. Notice that this is billed as some kind of anti-porn filter, but the actual block is against “harmful content”, which could essentially be “whatever the government wants to block by default.” The reason that we need this seems to be that “Growing numbers of parents do not feel in control of what their families are exposed to online”. Apparently actually watching what your kids are doing and engaging with them so that they don’t need a nannying internet filter is too much effort for the modern family.

by Mrmist at June 28, 2012 07:13 AM

May 24, 2012

mquin's other blog

Being Digital

Something I've not really spoken about here yet but which has been taking up a lot of my time over these last months is study.

I decided last year to start working towards a degree part-time with the Open University. The first stage of this has been a six-month computing and information technology module entitled "My Digital Life".

Going back to school, as it were, well over a decade since I finished college has been a bit of a change. The course itself has been very engaging, doing a good job of mixing a fairly broad range of topics from the early history of computing to more modern areas such as social networking and wireless sensor networks, with research and study skills.

As you might expect, the OU - established as a distance learning university - has embraced technology for education, and the tools used to deliver the course were a great help.

Coming to the end of this module I'm fairly confident that I have done well, and although I feel I'm still developing in terms of being able to think and write like an academic I have found it to be very enjoyable.

May 24, 2012 09:30 AM

90 days

A little late, and this should be the last of these posts for a while - don't want to bore you all.

85kgs seems to be the awkward point, with my weight floating around that mark for the last month or so. I'm fairly happy with the results and I've been starting with a bit of strength work in the last few weeks.

As the weather is getting better I'm hoping to spend more time on the bike as well.

Small moves, but it seems to be working.

May 24, 2012 09:19 AM