Planet freenode

April 18, 2014

RichiH's blog

higher security

Instant classic

Trusted:

NO, there were errors:
The certificate does not apply to the given host
The certificate authority's certificate is invalid
The root certificate authority's certificate is not trusted for this purpose
The certificate cannot be verified for internal reasons

Signature Algorithm: md5WithRSAEncryption
    Issuer: C=XY, ST=Snake Desert, L=Snake Town, O=Snake Oil, Ltd, OU=Certificate Authority, CN=Snake Oil CA/emailAddress=ca@snakeoil.dom
    Validity
        Not Before: Oct 21 18:21:51 1999 GMT
        Not After : Oct 20 18:21:51 2001 GMT
    Subject: C=XY, ST=Snake Desert, L=Snake Town, O=Snake Oil, Ltd, OU=Webserver Team, CN=www.snakeoil.dom/emailAddress=www@snakeoil.dom
...
            X509v3 Subject Alternative Name: 
            email:www@snakeoil.dom

For your own pleasure:

openssl s_client -connect www.walton.com.tw:443 -showcerts

or just run

echo '
-----BEGIN CERTIFICATE-----
MIIDNjCCAp+gAwIBAgIBATANBgkqhkiG9w0BAQQFADCBqTELMAkGA1UEBhMCWFkx
FTATBgNVBAgTDFNuYWtlIERlc2VydDETMBEGA1UEBxMKU25ha2UgVG93bjEXMBUG
A1UEChMOU25ha2UgT2lsLCBMdGQxHjAcBgNVBAsTFUNlcnRpZmljYXRlIEF1dGhv
cml0eTEVMBMGA1UEAxMMU25ha2UgT2lsIENBMR4wHAYJKoZIhvcNAQkBFg9jYUBz
bmFrZW9pbC5kb20wHhcNOTkxMDIxMTgyMTUxWhcNMDExMDIwMTgyMTUxWjCBpzEL
MAkGA1UEBhMCWFkxFTATBgNVBAgTDFNuYWtlIERlc2VydDETMBEGA1UEBxMKU25h
a2UgVG93bjEXMBUGA1UEChMOU25ha2UgT2lsLCBMdGQxFzAVBgNVBAsTDldlYnNl
cnZlciBUZWFtMRkwFwYDVQQDExB3d3cuc25ha2VvaWwuZG9tMR8wHQYJKoZIhvcN
AQkBFhB3d3dAc25ha2VvaWwuZG9tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB
gQC554Ro+VH0dJONqljPBW+C72MDNGNy9eXnzejXrczsHs3Pc92Vaat6CpIEEGue
yG29xagb1o7Gj2KRgpVYcmdx6tHd2JkFW5BcFVfWXL42PV4rf9ziYon8jWsbK2aE
+L6hCtcbxdbHOGZdSIWZJwc/1Vs70S/7ImW+Zds8YEFiAwIDAQABo24wbDAbBgNV
HREEFDASgRB3d3dAc25ha2VvaWwuZG9tMDoGCWCGSAGG+EIBDQQtFittb2Rfc3Ns
IGdlbmVyYXRlZCBjdXN0b20gc2VydmVyIGNlcnRpZmljYXRlMBEGCWCGSAGG+EIB
AQQEAwIGQDANBgkqhkiG9w0BAQQFAAOBgQB6MRsYGTXUR53/nTkRDQlBdgCcnhy3
hErfmPNl/Or5jWOmuufeIXqCvM6dK7kW/KBboui4pffIKUVafLUMdARVV6BpIGMI
5LmVFK3sgwuJ01v/90hCt4kTWoT8YHbBLtQh7PzWgJoBAY7MJmjSguYCRt91sU4K
s0dfWsdItkw4uQ==
-----END CERTIFICATE-----
' | openssl x509 -noout -text
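If you just want a yes/no answer on validity, openssl can check expiry directly. A small sketch, demonstrated on a freshly generated throwaway certificate (file names are illustrative; the Snake Oil cert above would of course report expired):

```shell
# -checkend N exits non-zero if the cert expires within N seconds,
# so -checkend 0 asks "is it already expired?".
openssl req -x509 -new -newkey rsa:2048 -nodes -days 30 \
    -subj '/CN=throwaway' -out demo.pem -keyout demo.key 2>/dev/null
openssl x509 -in demo.pem -noout -checkend 0 >/dev/null \
    && echo 'still valid' || echo 'expired'
# → still valid
```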

At least they're secure against heartbleed.

by Richard 'RichiH' Hartmann at April 18, 2014 10:22 AM

April 17, 2014

erry's blog

Fallback for HTML5 date input

HTML5 is awesome. It gives us so many things that we previously had to do manually! Unfortunately, not all browsers support it yet.
Personally, I’m eager to use all the new features, but I don’t want to sacrifice browser support. For example, I don’t want to use jQuery UI to render a date picker if a browser supports the HTML5 <input type="date">.
Fortunately, you can check whether a browser supports the HTML5 way, and fall back to jQuery UI if it doesn’t! I used Modernizr for this, and it’s quite awesome. As you can see, there are many, many things it can detect, but for this particular instance, you can build a bundle with only “input types” (I might do a more extensive post on Modernizr later… hmmm).

First of all, build a Modernizr bundle with the options you want (or just “input types” like me) and download the resulting .js file. Assuming you already have jQuery UI installed, your code should look like this:

<html>
    <head>
        <title></title>
        <!-- The real path to the jQuery UI CSS... -->
        <link href="css/jquery-ui.css" rel="stylesheet" type="text/css" />
    </head>
    <body>
        <input type="text" class="date" />
        <!-- Path to jQuery & jQuery UI -->
        <script src="js/jquery.js"></script>
        <script src="js/jquery-ui.js"></script>
        <script>
            $( ".date" ).datepicker();
        </script>
    </body>
</html>

Which will work fine, but I still prefer the native datepicker where available. This is where Modernizr comes in handy! First of all, load the Modernizr script as well.

        <script src="js/modernizr.js"></script>

You can now use the date input type like you usually would:

        <input type="date" />

As long as you also add this somewhere in your javascript code:

<script>
if (!Modernizr.inputtypes.date) {
    $( "input[type=date]" ).datepicker();
}
</script>

It’s that easy! Now if the date input type isn’t supported, all your <input type="date"> elements will automatically use the jQuery UI datepicker. Additionally, you will automatically get the native solution when these browsers start supporting it, without having to change anything in the future.

Altogether, the code should look like this:

<html>
    <head>
        <title></title>
        <!-- The real path to the jQuery UI CSS... -->
        <link href="css/jquery-ui.css" rel="stylesheet" type="text/css" />
    </head>
    <body>
        <input type="date" />
        <!-- Path to jQuery & jQuery UI -->
        <script src="js/jquery.js"></script>
        <script src="js/jquery-ui.js"></script>
         <script>
         if (!Modernizr.inputtypes.date) {
             $( "input[type=date]" ).datepicker();
         }
         </script>
    </body>
</html>



Native datepicker in Chrome:

native date input


jQuery UI datepicker in Firefox:

jquery ui date picker

That’s all for now! (Should I blog about Modernizr more…? Let me know on twitter (@errietta) :p)

‘Till next time!

by Errietta Kostala at April 17, 2014 05:08 PM

mrmist's blog

Telephony Rant



Two months after submitting our “home move” order and almost two months since moving in to our new home, we have no phone line from BT. We’re incredibly lucky that the area is serviced by Virgin cable, so we have managed to obtain alternative Internets, otherwise I dare say I would be apocalyptic with rage. As it is, I’m just on “simmer” instead. I was actually moved to write a real letter to the company yesterday, after what was probably my fifth or sixth “update” – updates, that is, that don’t really update anything other than the next time that we’ll be called with an update. Pathetic.

by Mrmist at April 17, 2014 08:38 AM

April 16, 2014

RichiH's blog

secure password storage

Dear lazyweb,

for obvious reasons I am in the process of cycling out a lot of passwords.

For the last decade or so, I have been using openssl.vim to store less-frequently-used passwords and it's still working fine. Yet, it requires some manual work, not least of which is manually adding random garbage at the start of the plain text (and in other places) every time I save my passwords. In the context of changing a lot of passwords at once, this has started to become tedious. Plus, I am not sure if a tool with the complexity and feature-set of Vim is the best choice for security-critical work on encrypted files.

Long story short, I am looking for alternatives. I did some research but couldn't come up with anything I truly liked; as there's bound to be tools which fit the requirements of like-minded people, I decided to ask around a bit.

My personal short-list of requirements is:

  • Strong crypto
  • CLI-based
  • Must add random padding at the front of the plain text and ideally in other places as well
  • Should ideally pad the stored file to a few kB so size-based attacks are foiled
  • Must not allow itself to be swapped out, etc.
  • Must not be hosted, cloud-based, as-a-service, or otherwise compromised-by-default
  • Should offer a way to search in the decrypted plain text, nano- or vi-level of comfort are fine
  • Both key-value storage or just a large free-form text area would be fine with a slight preference for free-form text
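For comparison, the padding-plus-encryption parts of this wishlist can be approximated with plain openssl. A rough sketch only, not an endorsement of any particular scheme; the file names and the inline passphrase are illustrative, and a real tool would prompt instead of hard-coding it:

```shell
# Sketch: pad the plain text front and back with random garbage so that
# neither the offset of the secrets nor the file size gives anything away,
# then encrypt symmetrically.
PASSPHRASE='correct horse battery staple'   # illustrative; prompt in practice

printf 'example.com: hunter2\n' > passwords.txt
head -c 512 /dev/urandom | base64 > padded.txt         # leading padding
cat passwords.txt >> padded.txt
head -c 3072 /dev/urandom | base64 >> padded.txt       # pad file past 4 kB

openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass "pass:$PASSPHRASE" -in padded.txt -out passwords.enc
shred -u padded.txt passwords.txt

# searching the decrypted text works with any pipe-friendly tool:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass "pass:$PASSPHRASE" -in passwords.enc | grep 'example.com'
```

This obviously does nothing about swapping or editor temp files, which is exactly why a dedicated tool would still be preferable.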

Any and all feedback appreciated. Depending on the level of feedback, I may summarize my own findings and suggestions into a follow-up post.

by Richard 'RichiH' Hartmann at April 16, 2014 06:47 AM

April 15, 2014

freenode staffblog

Heartbleed

The recently exposed heartbleed bug in the OpenSSL library has surprised everyone with a catastrophic vulnerability in many of the world’s secure systems.

In common with many other SSL-exposed services, some freenode servers were running vulnerable versions of OpenSSL, exposing us to this exploit. Consequently, all of our affected services have been patched to mitigate the vulnerability, and we have also regenerated our private SSL keys and certificates.

In an unrelated event, due to service disruption and the misconfiguration of a single server on our network, an unauthorised user was allowed to use the ‘NickServ’ nickname for a short period on Sunday morning. Unfortunately, there is a possibility that your client sent data (including your freenode services password) to this unauthorised client. Identification via SASL, CertFP or server password was not affected, but any password sent directly to the “NickServ” user might have been.

Because of these two recent issues, we would like to make the following recommendations to all of our users. It would also be good practice to follow them at regular intervals.

  • Though we are not aware of any evidence that we have been targeted, or our private key compromised, this is inevitably a possibility. SSL sessions established prior to 2014/04/12 may be vulnerable. If your current connection was established via SSL prior to this date, you should consider reconnecting to the network.
  • We would advise that users reset their password (after reconnecting) using instructions returned by the following command:

/msg nickserv help set password

This should help ensure that if your password was compromised through an exploitation of the Heartbleed vulnerability, the damage is limited.

  • In line with general best practice, we would always recommend using separate passwords on separate systems – if you shared your freenode services password with other systems, you should change your password on all of these systems, preferably to individual ones.
  • If you use CertFP, you should regenerate your client certificate (instructions) and ensure that you update NickServ with the new certificate hash. You can find out how to do this using the following command:

/msg nickserv help cert

  • Having changed passwords and/or certificate hashes, it cannot hurt to verify your other authentication methods (such as email, ACCESS or CERT). It is possible you have additional access methods configured either from past use or (less likely) due to an account compromise.
  • Finally, it is worth noting that, although probably the least likely attack vector, Heartbleed can also be used as a client-side attack, i.e. if you are still running a vulnerable client, a server could attack you. This could be a viable attack if, for instance, you connect to a malicious IRC server and freenode at the same time; hypothetically, the malicious IRC server could then attack your client and steal your IRC password or other data. If affected, you should ensure your OpenSSL install is updated and not vulnerable, then restart your client.
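For reference, regenerating a CertFP client certificate usually looks something like the sketch below. File names, lifetime and the CN are illustrative; the exact registration steps are in the NickServ help referenced above:

```shell
# Sketch: generate a new self-signed client certificate for CertFP.
openssl req -x509 -new -newkey rsa:2048 -sha256 -days 1096 -nodes \
    -subj '/CN=mynick' -out freenode.cer -keyout freenode.key
cat freenode.cer freenode.key > freenode.pem   # many clients want one file

# The certificate's SHA1 fingerprint, as a bare hex string:
openssl x509 -in freenode.cer -noout -fingerprint -sha1 \
    | sed -e 's/^.*=//' -e 's/://g'
```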

As ever, staff are available in #freenode to respond to any questions or concerns.

by Pricey at April 15, 2014 07:35 PM

April 14, 2014

RichiH's blog

git-annex corner case: Changing commit messages retroactively and after syncing

This is half a blog post and half a reminder for my future self.

So let's say you used the following commands:

git add foo
git annex add bar
git annex sync
# move to different location with different remotes available
git add quux
git annex add quuux
git annex sync

What I wanted to happen was for it to simply sync the already-committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: commit what was added earlier and use "git-annex automatic sync" as the commit message.

This is not a problem in and of itself, but as this is my master annex and as I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.

Changing old commit messages is easy:

git rebase --interactive HEAD~3

Pick the r option for "reword" and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. Problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and the old commits are back, along with a merge commit like an ugly cherry on top. Blegh.

I decided to leave my comfort zone and ended up with the following:

# always back up before poking refs
git clone --mirror repo backup

git reset --hard 1234
git show-ref | grep master
# for every ref returned, do:
  git update-ref $ref 1234

Rinse and repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.

For good measure, I am running

git fsck && git annex fsck

on all my remotes now, but everything looks good so far.

by Richard 'RichiH' Hartmann at April 14, 2014 10:46 PM

April 13, 2014

erry's blog

Using jQuery to make all forms AJAX-powered

Hi,

I recently wrote a piece of hacky JavaScript to automatically make all my forms powered by AJAX + JSON. I still had to write functions to handle the JSON data, but it saved me time over retrieving values from the form and performing an AJAX request for every form. Plus, if I didn’t need to use JSON and just used the DOM to change the innerHTML of something, I’d save even more time.

This is my (commented!) JavaScript code (note that it requires jQuery):

function getSubmits(form) {
    return $(form).children("input[type=submit],button[type=submit]");
}

//default json handler: just return a div with the result as its innerHTML
function default_json_handler(json) {
    var div = $('<div/>');
    div.html (json);
    return div;
}

function ajaxify() {
    var forms = $('form'); //ALL the forms!
    var sidx  = 0;

    forms.unbind();

    forms.each (function(idx, form) {
        //We'll hold the handler function in the form's data-json-handler
        //attribute. window['function_name'] will tell us if the function
        //exists
        var jsonh   = window[$(form).attr('data-json-handler')]
                        || default_json_handler;
        var submits = getSubmits(form);
        submits.unbind();

        var method  = $(form).attr('method');
        var url     = $(form).attr('action');

        submits.click(
            function(e) {
                //the data that we'll submit through ajax
                var params     = [];

                //This may be useful to let your script
                //know that the data is being posted through
                //AJAX; for example I use this to return JSON
                //instead of XML.
             
                params['ajax'] = [1];

                if(!$(this).attr('data-sidx')) {
                    $(this).attr('data-sidx', sidx++);
                }

                //since we clicked a submit button, we'll be submitting
                //its value if it has a name...
                var name = $(this).attr('name');
                if (name) {
                    params[name] = [encodeURIComponent($(this).val())];
                }

                //get the rest of the input values
                params  = getInputValues($(this), params);
                //get the request string
                params  = getRequestStr(params);

                //We use this ID so that we have a specific result div for
                //each of our submit buttons
                var div = $('#result' + $(this).attr('data-sidx'));

                if (!div[0]) {
                    div = $('<div/>', {id: 'result' + $(this).attr('data-sidx')})
                    .appendTo(form);
                }
               
                $.ajax(url, {
                  dataType  :'json',
                  method    :method,
                  data      :params,
                  success   :function(data) {
                    div.empty();
                    div.hide();
                    //call the json handler and append the result.
                    div.append(jsonh(data));
                    div.fadeIn(800);
                    div.show(500, function() {
                        $('html, body').animate({
                            scrollTop: $(div).offset().top
                        }, 1000);
                    });
                  },
                  beforeSend :function() {
                    div.empty();
                    div.append(
                        $("<img src='static/img/loading.gif' />")
                    );
                  }
                });

                e.stopPropagation();
                e.preventDefault();
                return false;
            }
        );
    });
}

//gets a form's input values

function getInputValues(btn, params) {
    //get all the input values, but only checked
    //radios and checkboxes!
    var inputs = btn.parent().find(
        //all inputs that match the criteria
        'input[type!=submit]' +  // We've already handled this
            '[type!=radio]' +    // we handle this below 
            '[type!=checkbox]' + // likewise
            '[type!=button],' +  // their value isn't submitted...
                                 // unless they're submit buttons which
                                 // we've handled.

        ':checked,' +           // elements with the :checked state.
        'select,' +             // selects
        'textarea'              // textareas
    );

    inputs.each(function(idx, input) {
        var name  = $(input).attr('name');
        var value = $(input).val();

        if (name) {
            if (!params[name]) {
                params[name] = []; // We use an array here so that we can handle
                                   // Multiple elements with the same name (checkboxes?)
            }

            params[name].push(encodeURIComponent(value));
        }
    });

    return params;
}

// gets the request string from an array of parameters.
function getRequestStr(params) {
    var paramstr = '';

    for (var key in params) {
        for (var i in params[key]) {
            paramstr += (paramstr?'&':'') + key + '=' + params[key][i];
        }
    }

    return paramstr;
}

//calls the ajaxify function on document load
$(document).ready(ajaxify);
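To see what getRequestStr actually produces, here is the same flattening logic run standalone, outside the browser; the sample values are made up:

```javascript
// Same map-of-arrays flattening as getRequestStr above, shown standalone;
// values are assumed to be URI-encoded already, as in the code above.
function getRequestStr(params) {
    var paramstr = '';
    for (var key in params) {
        for (var i in params[key]) {
            paramstr += (paramstr ? '&' : '') + key + '=' + params[key][i];
        }
    }
    return paramstr;
}

var params = { ajax: [1], check: ['1', '3'], text: ['hello%20world'] };
console.log(getRequestStr(params));
// → ajax=1&check=1&check=3&text=hello%20world
```

The array values are what let repeated names (like the check[] checkboxes below) each get their own key=value pair.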

This is what our form looks like:

<!-- Pay attention to data-json-handler="result"! It's what we'll use
to handle the JSON result from AJAX! -->
<form method="post" action="script.php" data-json-handler="result">
<label>
    Text input:<br/>
    <input type="text" name="text" />
</label><br/>

<label>
    Password input:<br/>
    <input type="password" name="password" />
</label><br/>

<label>
    Textarea:<br/>
    <textarea name="textarea"></textarea>
</label><br/>

<label>
    Email input:<br/>
    <input type="email" name="email" />
</label><br/>

<label>
    Date input:<br/>
    <input type="date" name="date" />
</label><br/>


<label>
   <input type="radio" name="radio" value="1" /> Radio option 1
</label><br/>

<label>
   <input type="radio" name="radio" value="2" /> Radio option 2
</label><br/>

<label>
   <input type="radio" name="radio" value="3" /> Radio option 3
</label><br/>


Checkboxes:<br/>

<label>
   <input type="checkbox" name="check[]" value="1" /> Checkbox option 1
</label><br/>

<label>
   <input type="checkbox" name="check[]" value="2" /> Checkbox option 2
</label><br/>

<label>
   <input type="checkbox" name="check[]" value="3" /> Checkbox option 3
</label><br/>

Select:<br/>
<select name="select">
    <option value="">Please select...</option>
    <option value="Cow">Moo!</option>
    <option value="Pig">Oing oing!</option>
    <option value="Cat">Meow!</option>
    <option value="Dog">Woof!</option>
</select>
<br>
<input type="submit" name="submit" value="Submit with name!" />
<button type="submit" name="submit2" value="submit 2!">Submit &lt;button&gt; with name!</button>
<input type="submit" value="Submit without name!" />

</form>

<script src="jquery.js"></script>
<script src="ajaxify.js"></script>
<script src="jsonhandlers.js"></script>

And our jsonhandlers.js script will have the functions that actually handle the json result. For example:

function result(data) {

    var table = $("<table><tr><th>Name</th><th>Email</th></tr></table>");
    for (var i = 0; i < data.length; i++) {
        table.append($("<tr><td>" + data[i].name + "</td><td>" + data[i].email + "</td></tr>"));
    }
    
    return table;
}

Of course, script.php will have to return appropriate data. For this example, I just used:

<?php
     echo json_encode(
        array (
            (object)
            array (
                'name' => 'User 1',
                'email' => '[email protected]'
            ),
            (object)
            array (
                'name' => 'User 2',
                'email' => '[email protected]'
            ),
        )
    );
?>

But in the real world you’d use the submitted input to build the data.

You can now see how the form works. You’ll notice when you submit it you get your results in a table, and if you inspect the HTTP request you should see all the data that is sent:

form

You can verify that all the input values are sent to the PHP page.

tl;dr: what our JavaScript code does is go through every form and add a handler to the submit buttons. When one is clicked, it uses jQuery to find the right elements and get their values. It then constructs a string to perform an AJAX request, including ajax=1 so that our server script knows it was submitted with AJAX and can act accordingly. After the result is acquired, the function set in the form’s data-json-handler attribute is called, and it’s up to you how you handle the result.

If you want to change the JSON result behaviour, just change the data-json-handler of each form!

That’s all for now. Hopefully this was helpful and not too confusing!

by Errietta Kostala at April 13, 2014 05:14 PM

April 02, 2014

erry's blog

GIMP tutorial – Designing freenode’s April fool’s blog post

Note: This is a (late) April fool’s post. It’s not to be taken seriously.

Hi,

So I’m a member of freenode’s volunteer staff. I mostly deal with user support tasks and development, but I sometimes volunteer for other tasks as well. In this case, I took on the task of helping write our April fool’s blog post. I created the image that you see in the blog post.

I will now teach you how to create an image so horrible it can be used in freenode’s April fools blog post, too!
Open GIMP and create a 2648 x 852 image. The hugeness of it contributes to it looking like a scanned newspaper release.

We will start with the easy elements. Grab the text tool and type ‘For immediate release: Tuesday, April 1, 2014’ in 20pt Comic Sans MS. Put the text on the top left of the image, with some margin. Then, drag a guide from the left ruler to where the text is, so that the rest of our elements will be aligned well.

You should now get something like this:

step1

Next, create the freenode logo in the Google font. Luckily, there are web tools that do this; we will be using http://myog.searchmyway.com/?name=freenode

Take a screenshot of that logo, and open the screenshot in GIMP. If necessary, crop the logo to size so it has no extra white space; you can do this with Image -> Autocrop Image. Now, copy your logo and go back to your original image. Make sure you have the ‘Layers’ panel up; if not, it’s in Window -> Dockable Dialogues -> Layers, or Ctrl + L. In that window, the bottom left button creates a new layer. Click on it, then paste your logo. Then go to Layer -> Anchor Layer, or press Ctrl + H. If you did this right, you should have your logo in its own layer. You can give it a name, such as ‘logo’, to make things easier. Also, go to Layer -> Autocrop Layer so that the layer is the right size.

You should have something like this at this point.

2

Now here’s the tricky part: we want to insert our crypto challenge’s first clue, ‘IyMjI3hrY2Q=’, into the ‘o’ in ‘freenode’. To do that, we need to stretch our logo out enough. First, use the text tool to actually write ‘IyMjI3hrY2Q=’ in monospace font at size 18. Place it anywhere for now. Now, drag your logo layer near the date, with some margin, and start scaling it proportionately (select the scale tool, and make sure the little chain icon is pressed) until it fills the whole image width. After that, select the scale tool again, but this time make sure the icon is not active (so press it again). Finally, reduce the layer height until it’s around 240 pixels. It’s okay if it’s slightly more or slightly less.
You can now drag your text layer (the IyMjI3hrY2Q= one) to be inside of the ‘o’.

At this point, you should have something like this:

3

Now the hard part is done. For the next step, we will add the main text. Select the text tool, click near your guide (with some margin under the logo) and start dragging the text tool until it takes up almost the full width. Copy and paste the following text:

It is with great pleasure that we announce the integration between our network services and Google+. We have been working very closely with Google over the past year trying to implement Google+ into our network and would like to thank them for partnering with us on this initiative as they seek to drive adoption of Google+. We feel our respective missions are synergistically aligned and we look forward to collaborating further. Email verification will shortly be deprecated and will be phased out in the near future, to be replaced by a requirement for a Google+ profile.

To apply for the beta test of the integration you can simply send:
/msg NickServ SET PROPERTY GOOGLE+ ON

Once accepted into the beta, a welcome email will appear in your inbox hyperlinking to a +freenode page in your google account settings. Following the easy prompts will grant you the ability to turn on authentication passthrough. A dedicated GooServ will be loaded shortly as features mature, including the inevitable amalgamation into hangouts. Please stay tuned!

Please note, we have yet to load a new version of the help documentation that contains the Google+ integration steps into NickServ.

If you need any further assistance please don’t hesitate to join #freenode-google+ and ask your questions.

If the text is not in the right place, feel free to move the layer with the arrow keys. Hold shift for bigger movements. What you’re aiming for is something like this.

4

Use the text tool again, but this time right under the freenode logo. Write ‘Official staff newspaper’ in 19pt Comic Sans MS Bold. You can drag a guide across to where the logo is to make sure you have placed your text in the right place. Once again, do feel free to move the text after you’ve created it, if needed.

5

Now for the last part, the pony image. We will steal this from our April 2012 blog post image, http://blog.freenode.net/wp-content/uploads/2012/04/ITucplOwnTShozIfVT1cM2u0VTWyVPZwp3EupaD.jpg. Simply open that image in GIMP and crop out the pony logo. Then, copy it, and go back to your original. Make a new layer once again, paste the pony, and anchor the layer, like we did a while ago. You can also use Layer -> Autocrop Layer if you want. Now, move the layer so that it touches the first guide, and leave some margin from the bottom, like so:

6

Congratulations, the hard work is done! Save your work, and export it as a png. (File->Export as…)

Now you may have noticed that the blog post image looks more like a scanned document. It’s OK, I’ll show you how to do that, too. It’s actually very easy, thanks to http://tex.stackexchange.com/questions/94523/simulate-a-scanned-paper.

Simply replace the command there to suit your filenames:

convert freenode.png \( +clone -blur 0x1 \) +swap -compose divide -composite -gamma 0.1 -linear-stretch 5%x0% -rotate 1.5 as-scanned.png

Note: the ‘convert’ command is part of the ‘imagemagick’ package.
And you should have your final result:

as-scanned

However, it seems that image was too big for the blog.
Oh dear! But you can probably tell how I made the final image out of it: simply increase the canvas height as much as needed, and drag the text box to take over that area. Afterwards, use guides to move elements around as needed (you made layers, right!?). Because we needed to resize the logo, I decided to move the hint, too.
You’ll easily get this:

10

That’s it, you’ve created an official page of the freenode staff newspaper, and made it look like it was scanned. Congratulations!

(Once again, this post, and of course the original one on blog.freenode.net is a bit of an April fool’s joke. My design skills aren’t normally THAT bad :D)

by Errietta Kostala at April 02, 2014 10:00 AM

freenode staffblog

+freenode

UPDATE: This was of course an April Fool… you can “/msg nickserv set property GOOGLE+” to remove the property from your account. There might still be other secrets within the message though…

freenode4

Edit: Previous versions of the post contained an incorrect NickServ command. We have corrected this and apologise for the inconvenience.

by Pricey at April 02, 2014 08:15 AM

April 01, 2014

Md's blog

Real out of band connectivity with network namespaces

This post explains how to configure a second, totally independent network interface with its own connectivity on a Linux server. It can be useful for accessing the server when the regular connectivity is broken.

This is possible thanks to network namespaces, a virtualization feature available in recent kernels.

We need to create a simple script, run at boot time, which will create and configure the namespace. First, move the network interface which will be dedicated to it into the new namespace:

ip netns add oob
ip link set eth2 netns oob

And then configure it as usual with iproute, by executing it in the new namespace with ip netns exec:

ip netns exec oob ip link set lo up
ip netns exec oob ip link set eth2 up
ip netns exec oob ip addr add 192.168.1.2/24 dev eth2
ip netns exec oob ip route add default via 192.168.1.1

The interface must be configured manually because ifupdown does not support namespaces yet, and it would use the same /run/network/ifstate file which tracks the interfaces of the main namespace (this is also a good argument in favour of something persistent like Network Manager...).

Now we can start any daemon in the namespace; just make sure that it will not interfere with the on-disk state of other instances:

ip netns exec oob /usr/sbin/sshd -o PidFile=/run/sshd-oob.pid

Netfilter is virtualized as well, so we can load a firewall configuration which will be applied only to the new namespace:

ip netns exec oob iptables-restore < /etc/network/firewall-oob-v4

As documented in ip-netns(8), iproute's netns add will also create a mount namespace and bind-mount the files in /etc/netns/$NAMESPACE/ into it: this is very useful, since some details of the configuration, like the name server IP, will be different in the new namespace:

mkdir -p /etc/netns/oob/
echo 'nameserver 8.8.8.8' > /etc/netns/oob/resolv.conf

If we connect to the second SSH daemon, we will get a shell in the second namespace. To enter the main one, i.e. the one used by PID 1, we can use a simple script like:

#!/bin/sh -e
exec nsenter --net --mount --target 1 "$@"

To reach the out-of-band namespace from the main one, we can instead use:

#!/bin/sh -e
exec nsenter --net --mount --target $(cat /var/run/sshd-oob.pid) "$@"

Scripts like these can also be used in fun ssh configurations like:

Host 10.2.1.*
 ProxyCommand ssh -q -a -x -N -T server-oob.example.net 'nsenter-main nc %h %p'

April 01, 2014 04:26 PM

March 25, 2014

RichiH's blog

Train them to submit

And today from our ever-growing collection of what the fuck is wrong with you people?!...

This is wrong on so many levels, I can't even begin to describe it. Sadly, it seems that this will get funded. And if it does not, technology will only become cheaper over time...

by Richard 'RichiH' Hartmann at March 25, 2014 02:11 PM

March 22, 2014

RichiH's blog

Lenovo X1 Carbon

Christine's accidental blog spam on planet.d.o just now gave me the chance to re-read the comments in her post.

The state from back then is still the current state on up-to-date Debian unstable:

  • Microphone button does nothing.
  • USB 3.0 docking station's display does not work, thanks to the content mafiaa; there is an admittedly small petition trying to fix this, but obviously it is going to be ignored.
  • No one using Debian seems to care about the fingerprint reader.
  • UMTS/WWAN modem works fine on Windows, but Linux loses connection to the USB device all the time. As a result, UMTS does not work.

The last item has the most impact on me. The need to tether when you have a dedicated SIM card, built-in modem, and good antennas in your laptop is... infuriating. Especially as it's working as intended on Windows.

As an added benefit, even though I saved the PIN in network manager, it asks me for the PIN every time I log in and every time after hibernating. For a device which I can't use in the first place. Argh!

by Richard &#x27;RichiH&#x27; Hartmann at March 22, 2014 07:04 AM

March 14, 2014

RichiH's blog

Git prize: Outstanding Contribution to Open Source/Linux/Free Software

In February, Linux Magazine contacted me, asking if I would be willing to accept the Linux New Media Award 2014 in the main category "Outstanding Contribution to Open Source/Linux/Free Software" on behalf of the Git community due to my involvement with evangelizing and vcsh. Needless to say, I was thrilled.

I managed to poke Junio via someone at Google and he agreed. We also reached out within the German Git community and two maintainers of git submodule, Jens Lehmann and Heiko Voigt, joined in as well. While we didn't manage to hammer out interoperability issues of vcsh and git submodule due to time constraints and too much beer, we are planning to follow up on that.

Git beat OpenStack, Python, and Ubuntu by a huge margin; sadly I don't have exact numbers (yet).

More details and a rather crummy photo can be found in Linux Magazine's article. A video of the whole thing will be uploaded to this page soonish. If it appears that we kept our "speech" very short, that was deliberate after the somewhat prolonged speeches beforehand ;)

The aftershow event was nice even though the DJ refused to turn the music down to tolerable levels; his reaction to people moving farther away, and asking him to turn down the volume a bit, was to turn it up... Anyway, given the mix of people present during the award ceremony, very interesting discussions ensued. While I failed to convert Klaus Knopper to zsh and git, at least there's a chance that Cornelius Schuhmacher will start using vcsh and maybe even push for a separation of configuration and state in KDE.

The most interesting tidbits of the evening were shared by Abhisek Devkota of cyanogenmod fame. Without spilling any secrets it's safe to say that the future of cyanogenmod is looking extremely bright and that there are surprises in the works which will have quite the impact.

Last but not least, here's the physical prize:

Glass trophy held by yours truly

by Richard &#x27;RichiH&#x27; Hartmann at March 14, 2014 11:19 PM

March 08, 2014

RichiH's blog

Panem et congressūs

There's a German joke about Germans:

Q: How can you find Germans abroad?
A: You stand outside a bakery and wait until someone starts cussing.

I am constantly reminded of this joke, abroad and back home, as we do tend to take our bread seriously...

As an additional data point supporting this, the German local team just spent 20 minutes discussing bread. I somehow doubt you could make an off-hand comment about bread and garner much interest in most communities; in a German one, this works.

In mildly related news, this is what bits.debian.org will pick up soon:

During the DebConf committee meeting, it has been decided that the 16th annual Debian Conference, DebConf15, will be held in Germany next year.

Thanks to the Belgian and Swedish teams; we are looking forward to their renewed bids for future DebConfs!

Specifics as to location and date are still being nailed down and we will keep Debian as a whole informed about our progress.

A dedicated (English-language) mailing list has been created for the organization and we welcome interested people to subscribe and join the discussion.

by Richard &#x27;RichiH&#x27; Hartmann at March 08, 2014 05:26 PM

March 06, 2014

RichiH's blog

DC15.de

Here's to a happy, successful, and overall quite awesome DebConf15 in Germany.

Details to follow :)

by Richard &#x27;RichiH&#x27; Hartmann at March 06, 2014 10:04 PM

March 03, 2014

RichiH's blog

Facial recognition

Dear crazyweb,

while there seem to be good GUI tools to enable facial recognition with FLOSS, they all fall short of my requirements. And while there seem to be a lot of research projects with open code, they seem to be lacking in the "usable in real life" department.

It seems as if there should be something to scratch that itch, but I couldn't find it...

Thus, my wishlist for facial recognition software:

  • MUST NOT send any data to any third parties!
  • Must run on Linux
  • GUI and CLI are both fine as long as the rest of the specs are met, but good CLI-integration would be a huge plus
  • Should offer batch-verification of detected faces like so
  • Must not rely on duplicating files in its own data structure/DB/directory; symlinks are fine
  • Should cope with source files disappearing
  • Should be able to list/diff files which are new or not yet tagged
  • Must not require being able to write to any picture files
  • Must be able to store data outside of the original pictures
  • Should not modify the source directories without being told to; temp files, face DB, and similar should all be located in a place I decide upon
  • Should offer batch-processing
  • Must be able to trigger a command or script for all verified identifications i.e. the ones I manually set to matching the person; alternatively, at least be able to export data in a way I can build scripts upon
  • Should be able to cope with faces changing over time, people growing older, getting a beard, etc
  • Should be FLOSS if at all possible
  • MUST NOT send any data to any third parties!
  • I consider tags to be permanent, the DB for the program should ideally be ephemeral but I am aware that this may not be possible
  • If the DB needs to be retained, it should ideally be in a merge-friendly text format not binary but that may be asking too much ;)
  • MUST NOT send any data to any third parties!
  • Ponies.

I will gladly follow up with a workflow blog post assuming I end up with useful feedback.

by Richard &#x27;RichiH&#x27; Hartmann at March 03, 2014 08:20 PM

February 25, 2014

erry's blog

Displaying an HTML slider’s value

HTML5 sliders, or <input type="range">, are a cool new input type, but the problem is that the user has no indication of which value they have selected. Luckily, this is easy to fix.

First, create your slider and give it a class name and a unique id. For example:

<input type="range" class="slider" id="slider1" /><br/>
<input type="range" class="slider" value="20" id="slider2" />

Next, decide where its value will go. You can create a span whose id is the slider's id + ‘_value’, for example. This will let us access the elements with JavaScript later:

<input type="range" class="slider" id="slider1" />
<span id="slider1_value"></span><br/>
<input type="range" class="slider" value="20" id="slider2" />
<span id="slider2_value"></span>

Now all we have to do is loop through our sliders, and update the innerHTML of the value span as they change:

<script>
    var sliders = document.getElementsByClassName('slider');
    // class='slider' elements :p
    var len     = sliders.length;

    for ( var i = 0; i < len; i++ ) {
        var slider = sliders[i];

        slider.addEventListener('change', function() {
            updateValue(this);
        });

        updateValue(slider);
    }

    function updateValue(slider) {
        var id        = slider.id;

        if (!id) {
            return;
        }

        var val       = document.getElementById(id + '_value');
        // Find the span whose id is the id of the slider + _value..
        
        if (val) {
            val.innerHTML = slider.value; // And update it!
        }
    }
</script>

Now your sliders should have values next to them (or wherever you place your _value element) that auto-update as you change them =)

Example!


That's all for now! See you next time =)

by Errietta Kostala at February 25, 2014 09:35 PM

CodeIgniter core classes

One of the things I like about CodeIgniter is that it lets you create and extend core classes. For example, say I want to extend the functionality of the controller class so that it does not allow me to access certain pages unless I am logged in:

Simply create application/core/MY_Controller.php to extend the core controller class. Then you can do this:

<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

   class MY_Controller extends CI_Controller {
    protected $user;

    public function __construct() {
        parent::__construct();

        $this->load->helper('url');
        $this->load->library('session');

        $this->user     = $this->session->userdata('user');
        # or your own way to check if the user is logged in
        $page           = uri_string();

        if (
            !$this->user
            && strpos ( $page, 'login' ) === false
            && strpos ( $page, 'register' ) === false
            && $page != ''
        ) { # this allows you to access pages matching 'login', 'register',
            # and the index page.
            # you can configure this of course.
            $this->session->set_flashdata(
                'error',
                'You must log in to see this page.'
            );

            redirect ('/login');
        }
    }
}
?>

Now when you write your own controller you would extend MY_Controller instead of CI_Controller, for example:

<?php
class Profile extends MY_Controller {

And all your pages will have this functionality. As an added bonus, every controller that extends MY_Controller will have the user session in $this->user.

Another cool thing you can do by extending core controllers is writing functions you want every controller to have available:

<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

   class MY_Controller extends CI_Controller {
    protected $user;

    public function __construct() {
        parent::__construct();

        $this->load->helper('url');
        $this->load->library('session');

        $this->user     = $this->session->userdata('user');
        # or your own way to check if the user is logged in
        $page           = uri_string();

        if (
            !$this->user
            && strpos ( $page, 'login' ) === false
            && strpos ( $page, 'register' ) === false
            && $page != ''
        ) { # this allows you to access pages matching 'login', 'register',
            # and the index page.
            # you can configure this of course.
            $this->session->set_flashdata(
                'error',
                'You must log in to see this page.'
            );

            redirect ('/login');
        }
    }

    protected function stash_input() {
        foreach ($this->input->post() as $key => $value) {
            $this->session->set_flashdata($key, $value);
        }
    }
}
?>

stash_input here, for example, will store all of $_POST in your flash data. This is useful for postback forms: if an error happens when submitting the form, you can store the input in the session data and then use it in your view:

//...
if ($error) {
    $this->stash_input();
    $this->session->set_flashdata("error", $error);
    redirect(site_url('/your_form'));
    return;
}
//...

Overall, extending core classes can be very helpful and is one of my favourite features of CodeIgniter!

That’s all for now. Until next time, see ya!

by Errietta Kostala at February 25, 2014 10:32 AM

February 20, 2014

Md's blog

Automatically unlocking xscreensaver in some locations

When I am at home I do not want to be bothered by the screensaver locking my laptop. To solve this, I use a custom PAM configuration which checks if I am authenticated to the local access point.

Add this to the top of /etc/pam.d/xscreensaver:

auth sufficient pam_exec.so quiet /usr/local/sbin/pam_auth_xscreensaver

And then use a script like this one to decide when you want the display to be automatically unlocked:

#!/bin/sh -e

# return the ESSID of this interface
current_essid() {
  /sbin/iwconfig $1 | sed -nre '/ESSID/s/.*ESSID:"([^"]+)".*/\1/p'
}

# automatically unlock only for these users
case "$PAM_USER" in
  "")   echo "This program must be run by pam_exec.so!"
        exit 1
        ;;
  md)   ;;

  *)    exit 1
        ;;
esac

CURRENT_ESSID=$(current_essid wlan0)

# automatically unlock when connected to these networks
case "$CURRENT_ESSID" in
  MYOWNESSID) exit 0 ;;
esac

exit 6
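The unlock policy itself is plain shell and easy to test without involving PAM at all. Here is a hypothetical refactoring of the same user/ESSID checks into one function (names and return codes mirror the script above):

```shell
# Hypothetical refactoring of the script's decision logic:
# exit 0 (auto-unlock) only for user "md" on ESSID "MYOWNESSID".
may_unlock() {
    user="$1" essid="$2"
    case "$user" in
        md) ;;                  # allowed user, keep checking
        *)  return 1 ;;         # anyone else: never auto-unlock
    esac
    case "$essid" in
        MYOWNESSID) return 0 ;; # trusted network: unlock
    esac
    return 6                    # unknown network: require password
}
```

With pam_exec.so marked sufficient, a zero exit unlocks the screen immediately, while any non-zero exit simply falls through to the normal password prompt.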

February 20, 2014 05:49 AM

February 15, 2014

RichiH's blog

On init system debates

Quoth Rogério Brito:

Russ, thank you for your exemplary role.

And that's all that was left to say.

by Richard &#x27;RichiH&#x27; Hartmann at February 15, 2014 09:02 AM

February 04, 2014

freenode staffblog

Turbulence

As many of you will be aware, freenode has been experiencing intermittent instability today, as the network has been under attack. Whilst we have network services back online, the network continues to be a little unreliable and users are continuing to report issues in connecting to the network.

We appreciate the patience of our many wonderful users whilst we continue to work to mitigate the effects this has on the network.

We also greatly appreciate our many sponsors who work with us to help minimise the impact and who are themselves affected by attacks against the network.

We’ve posted on this subject before, and what we said then remains as true as ever – and for those of you who didn’t read the earlier blogpost first time round, it’s definitely worth perusing it now if this subject interests or affects you.

Thank you all for your patience as we continue to work to restore normal service!

[UPDATE 04/02/2014]

At the moment SASL authentication works only with PLAIN, *not* DH-BLOWFISH. We’ve checked and Tor should be working too. Sadly wolfe.freenode.net will be taken off the rotation, so those users who’ve connected specifically to it, please make sure that your client points to our recommended round-robin of chat.freenode.net!

by njan at February 04, 2014 04:53 PM

January 30, 2014

Md's blog

On people totally opposed to systemd

Do you remember the very vocal people who, a decade ago, would endlessly argue that udev was broken and that they would never use it?

Percentage over time of systems on which udev is installed

Sometimes you can either embrace change or be dragged along by it. We are beyond the inflection point, and the systemd haters should choose their place.

January 30, 2014 05:15 AM

January 27, 2014

RichiH's blog

You at FOSDEM?

Behind the scenes, all the pieces are falling into place to make this FOSDEM the best ever. As every year ;)

Part of making things even more awesome is a great effort: To tape ALL the things. As in, literally, every single talk, discussion, and lightning talk; devrooms included. We have the extra equipment, we have the extra storage space, but we don't have all the extra manpower, yet.

Why not sign up as a volunteer? There's a self-checkout system where you simply sign up for tasks. Heralding and video-taping are especially simple to do if you plan to attend the respective track/devroom, anyway.

If you prefer talking over listening, cloak room and infodesk are available as well; they are a lot more social and people always tell us it's great fun.

Finally, if your schedule is full to the brim, you can help us with build-up and teardown.

As an aside, there's a little experiment with IPv6 in the works.

by Richard &#x27;RichiH&#x27; Hartmann at January 27, 2014 06:50 PM

Conference proceedings

Seems I had exactly the same idea two years ago, down to choosing exactly the same name and directory structure...

In case anyone finds it useful, there's Conference proceedings: a public git-annex repository which contains videos from various conferences. As of right now, it contains only FOSDEM up to 2008 and last year's Chaos Communication Congress, 30c3. I expect to add more and more conferences over time and patches are always welcome.

Just try and make sure you don't include location data about your own repository in said commits. But there's already something in the works to fix that problem.

by Richard &#x27;RichiH&#x27; Hartmann at January 27, 2014 06:50 PM

January 19, 2014

erry's blog

Things to do while Microsoft Visual Studio installs

Microsoft Visual Studio (we’re forced to use it for uni, ugh) seems to take forever and a day to install. Here are a few fun things you can do while it does:
(Note that this is a joke, don’t take it /too/ seriously ;))

  • Watch The Lord of the Rings trilogy and all the Harry Potter movies 10 times
  • Play through every game in the Zelda series, aim for 100% completion
  • Install Linux on 1000 machines
  • Learn a new language
  • Watch paint dry
  • Travel around the world
  • Become an expert in martial arts
  • Count to infinity
  • Solve the P VS NP Problem
  • Go to university and graduate
  • Write this blog post :p

    by Errietta Kostala at January 19, 2014 09:49 PM

    January 13, 2014

    erry's blog

    A look at codeigniter

    We’re using CodeIgniter at university in our second term, so naturally I had to look at it during the holiday break before it started. CodeIgniter is a PHP MVC framework. I prefer it to writing PHP from scratch, but my personal opinion is that I still like Catalyst more. Then again, Catalyst is Perl-based, while CodeIgniter is PHP.

    Anyway, I’m going to look into making a simple registration and login system from scratch. The first step is, of course, to download CodeIgniter. You can get it from its website. Unzip the file somewhere your webserver will serve it, and check it out in your browser. Congratulations, you have the most basic setup ready!

    Looking at the contents of your new folder, application/ has all the things you should care about. Specifically, config/ has all the configuration-related stuff, controllers/ is where the controllers go, models/ is where the models go, and views/ is where the views go.

    The best place to start is the config. Having a look at the application/config folder we first have config.php which has some global settings. There are a few things you may want to change there, but nothing that will prevent the application from working. If accepting user input, however, I like to enable CSRF protection and XSS protection. In that file you can find

    $config['csrf_protection'] = FALSE;
    $config['csrf_token_name'] = 'csrf_test_name';
    $config['csrf_cookie_name'] = 'csrf_cookie_name';
    $config['csrf_expire'] = 7200;
    

    You can change that to something like:

    $config['csrf_protection'] = true; //You need to set this to true,
    $config['csrf_token_name'] = 'token'; //but these settings can be
    $config['csrf_cookie_name'] = 'token'; //whatever you want.
    $config['csrf_expire'] = 7200;
    

    Then, codeigniter will take care of CSRF protection for you! There’s also this:

    $config['global_xss_filtering'] = FALSE;
    

    If you change that to true, you will get global XSS protection. There are other settings in the file you may want to change, so look around.

    Next important file is application/config/database.php. If you’re doing anything database related, then you need to edit that. It’s pretty self explanatory, but

    $db['default']['hostname'] = 'localhost';
    $db['default']['username'] = '';
    $db['default']['password'] = '';
    $db['default']['database'] = ''; //You may want to create a database for your app
                                     //And put its name here.
    $db['default']['dbdriver'] = 'mysql';
    

    Is what you absolutely need to change. For the driver, I suggest mysqli and not the default value of mysql, since it has been deprecated for ages.

    There’s also application/config/routes.php, which determines how routing works in your app. We’ll look at that in a bit though.

    Now that the config part is done with for now, we can take a first look at controllers. If you look at application/controllers/welcome.php, you have a very basic controller that basically does this:

    public function index()
    {
          $this->load->view('welcome_message');
    }
    

    index() is called when a user goes to http://your_website/index.php/welcome/, but because the routing config (see above) chooses ‘welcome’ as the default controller, http://your_website/ will load the same page.
    In fact, you could do this:

    public function hello()
    {
          $this->load->view('welcome_message');
    }
    

    And then http://your_website/index.php/welcome/hello will display the same message.
    You might guess where that page comes from: application/views/welcome_message.php – change that file, and the page displayed when you load the welcome controller changes accordingly.

    Of course the whole point is to make your own views, controllers and models. Since we want to make a registration system, it’s probably good to start from the database. We’ll make a users table:

    CREATE TABLE IF NOT EXISTS `users` (
      `user_name` varchar(255) NOT NULL,
      `user_id` bigint(255) unsigned NOT NULL AUTO_INCREMENT,
      `password` varchar(255) NOT NULL,
      `email` varchar(255) NOT NULL,
      PRIMARY KEY (`user_id`),
      UNIQUE KEY `unique_user` (`user_name`,`email`)
    ) ENGINE=InnoDB  DEFAULT CHARSET=latin1;
    

    We can then make a model to reflect this database table. You can create all your models in application/models, so this would be application/models/user_model.php or similar

    <?php
    
        if ( !defined('BASEPATH') ) {
            exit('No direct script access allowed');
        }
    
        class user_model extends CI_Model {
            public function __construct() {
                $this->load->database(); //load the database class
            }
    
            public function login() {
                $username = $this->input->post('username');
                $password = $this->input->post('password');
    
                $query    = $this->db->get_where (
                    'users',
                    array (
                        'user_name' => $username
                    )
                );
    
                $user     = $query->row();
    
                if ( !$user ) {
                    return -1; //no such username
                }
    
                if ( $password != $user->password ) {
                    // NB: real code should store hashed passwords and
                    // compare with password_verify() instead
                    return 0; //wrong password
                }
    
                $_SESSION['user'] = $user->user_id;
                return 1; //correct
            }
    
            public function register() {
                $password   = $this->input->post('password');
    
                $data = array (
                    'user_name'    => $this->input->post('username'),
                    'password'     => $password,
                    'email'        => $this->input->post('email')
                );
    
                $this->db->insert ('users', $data);
    
                $_SESSION['user'] = $this->db->insert_id();
            }
        }
    ?>
    

    This is pretty self explanatory. It loads the database class, and defines a login function that checks if the database data matches the input data and a register one that inserts to the database. get_where allows you to get results for a where query, and insert inserts an array of keys and values to the specific database fields. You may see that we don’t check the input here. This is ok, we’ll do this in our controller.

    Now that we have our model we need a couple views. application/views/register.php:

    <?php
        $this->load->helper('form');
        echo form_open('register/submit');
    ?>
            <fieldset>
                <legend>Register</legend>
                <label>
                    User name:<br/>
                    <input type="text" name="username" value="" />
                </label><br/>
                <label>
                    Email:<br/>
                    <input type="email" name="email" value="" />
                </label><br/>
                <label>
                    Password:<br/>
                    <input type="password" name="password" value="" />
                </label><br/>
                <input type="submit" value="register" />
            </fieldset>
        </form>
    

    You may notice that instead of a <form> tag we use form_open. The reason for this is that this method also takes care of inserting the counter-CSRF fields if we enabled them in the config.

    Similarly, application/views/login.php:

    <?php
        $this->load->helper('form'); //this form helper gives us form_open
        echo form_open('login/submit');
    ?>
            <fieldset>
                <legend>Login</legend>
                <label>
                    User name:<br/>
                    <input type="text" name="username" value="" />
                </label><br/>
                <label>
                    Password:<br/>
                    <input type="password" name="password" value="" />
                </label><br/>
                <input type="submit" value="login" />
            </fieldset>
        </form>
    

    And now we can actually make our controllers, which will be simple since we have our user model ready.
    This is our register controller – application/controllers/register.php

    <?php
        session_start();
    
        class register extends CI_Controller {
            public function __construct() {
                parent::__construct();
                $this->load->model('user_model'); //load our user model
                //When you load it, you can use all its methods through
                //$this->user_model
                $this->load->helper('form'); //and the form helper
            }
    
            public function index() {
                $this->load->view('register'); //just display the form
            }
    
            public function submit() {
                $this->load->library('form_validation');//load the form validation lib
    
                $this->form_validation->set_rules(
                'username',
                'Username',
                'required|max_length[255]|is_unique[users.user_name]'
                ); //this will ensure the username isn't already in the database,
                   //that it's been provided,
                   //and that it's no more than 255 chars long
                $this->form_validation->set_rules('password',
                'Password', 'required|max_length[55]');
                $this->form_validation->set_rules('email', 'Email',
                'required|valid_email|is_unique[users.email]');
                 //Similarly, this will ensure
                 //The email is unique and valid.
    
                if ( $this->form_validation->run() === false ) {
                    $this->index(); //if the form validation fails show the index
                } else {
                    $this->user_model->register();
                    //this will create the db record from input
                    //do something else
                    //If you want to redirect a user to a page after this:
                    $this->load->helper('url');
                    redirect('/user');
                }
            }
        }
    ?>
    

    We can see how codeigniter made our lives easier here. Instead of writing the same boring validation methods again and again we get them provided to us, plus the database helper doesn’t hurt either.

    the login controller will be like this:

    <?php
        session_start();
    
        class login extends CI_Controller {
    
            public function __construct() {
                parent::__construct();
                $this->load->model('user_model');
                $this->load->helper('form');
    
            }
    
            public function index($error="") {
                $this->load->view('login', array ('error' => $error));
            }
    
            public function submit() {
                $this->load->library('form_validation');
    
                $this->form_validation->set_rules('username', 'Username', 'required');
                $this->form_validation->set_rules('password', 'Password', 'required');
    
                if ( $this->form_validation->run() === false ) {
                    $error = 'Please provide a username and password';
                    $this->index($error);
                } else {
                    $result = $this->user_model->login();
    
                    if ($result == -1) {
                        $error = "No such username";
                        $this->index($error);
                    } elseif ($result == 0) {
                        $error = 'Wrong password!';
                        $this->index($error);
                    } else {
                        //Success!
                        //Again, you can now...
                        $this->load->helper('url');
                        redirect('/');
                    }
                }
            }
        }
    ?>
    

    You might have noticed two things in the above. 1: Where do validation errors go? 2: What does

       $this->load->view('login', array ('error' => $error));
    

    do?

    As for the first question, the answer is validation_errors(). It’s a method of our form helper and you can use it in your view:

    <?php
            if (validation_errors()) {
    ?>
                <div class="error">
                    <?php
                        echo validation_errors();
                    ?>
                </div>
    <?php
            }
    ?>
    

    And for the second question, this just passes a variable into the view. This means we can do:

    <?php
    if (isset($error)) {
    ?>
        <div class="error">
            <?php
                echo $error;
            ?>
        </div>
    <?php
    }
    ?>
    

    You can use this to pass any required data to your model, such as database query results, etc.

    Congratulations, you should now have a working login system! You can of course see this in action at http://your_app/index.php/register/ and
    http://your_app/index.php/login respectively.

    The next thing to do from here is to look at The manual to look at how helpers work and other awesome things you can do.

    Until next time,

    by Errietta Kostala at January 13, 2014 05:01 PM

    December 31, 2013

    erry's blog

    The struggle with year 2 project!

    Hi,

    This is a bit of a more personal post (in that it talks about how I approached a project rather than being a tutorial) but I thought it’d be interesting to share anyway.

    Our year 2 project in University is actually really open — we can work on pretty much whatever we want. I wanted to work with something new and exciting, so I chose to do a project built around WebRTC. People would be able to make accounts and build teams then arrange and hold audio and video meetings that worked using WebRTC. There would also be a reminder before joining a meeting etc.

    I had no experience with WebRTC, so I knew this would be challenging — and I wasn’t wrong. The first problem was the actual signalling to get the WebRTC conversation started. You can use just about anything for that, but the best option is socket.io, because it supports multiplexing and lets you handle signalling with more than two clients. The problem is, I hadn’t realised this at first. Some form of signalling worked even with AJAX, and I switched to websockets soon afterwards; that worked as long as I had two clients. I couldn’t get it to work with more than two, though, and after spending countless hours with this article I finally got a three-way chat to work using a socket.io server (and changing my code to match the one in that tutorial).

    After that it got a bit quieter, tidying the GUI app and adding some of the more simple functionality such as log in, register, reminder emails etc. But it’s about to get busy again as I’ll attempt to do file transfer and desktop sharing too, to offer a fully-featured meeting suite ^_^
    Of course I played it safe by not including these parts in the developed Minimum Viable Product, so if I don’t succeed I won’t fail the assignment :p

    It’s overall a great project and I’m learning a lot from it, although it can get challenging with the WebRTC API and all!

    And I think that’s it from me in 2013 ;) see you next year!

    PS: If you want to know more about WebRTC, I’d read this, this, and this article, watch the video and check out the code.

    by Errietta Kostala at December 31, 2013 08:00 PM

    December 28, 2013

    RichiH's blog

    Release Critical Bug report for Week 52

    I had been pondering doing an end-of-year bug stat post. Niels Thykier forced my hand, so here goes :)

    The UDD bugs interface currently knows about the following release critical bugs:

    • In Total: 1353
      • Affecting Jessie: 476 That's the number we need to get down to zero before the release. They can be split in two big categories:
        • Affecting Jessie and unstable: 410 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
          • 52 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
          • 28 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
          • 330 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
        • Affecting Jessie only: 66 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
          • 0 bugs are in packages that are unblocked by the release team.
          • 66 bugs are in packages that are not unblocked.

    Graphical overview of bug stats thanks to azhag:

    by Richard 'RichiH' Hartmann at December 28, 2013 08:17 AM

    December 27, 2013

    RichiH's blog

    Random rant

    Q: What user space program can reliably lock up a Lenovo X1 Carbon with Intel Core i7-3667U, 8 GB RAM, and an Intel SSDSCMMW240A3L so badly that you can't ssh into it any more and ctrl-alt-f1 does not work? So badly that, after half an hour of waiting, the only thing that's left is to shut it off hard.

    A: Google Chrome with 50+ Flickr tabs open.

    Q: Why?

    A: I honestly don't know.

    by Richard 'RichiH' Hartmann at December 27, 2013 10:08 PM

    December 17, 2013

    RichiH's blog

    Chilling effects

    When a porn joke becomes a political statement.

    And when you think for several minutes if you want to write a blog post with this title as it's quite obviously a trigger word.

    by Richard 'RichiH' Hartmann at December 17, 2013 11:50 AM

    December 14, 2013

    RichiH's blog

    SteamOS

    So SteamOS has been released.

    While that's marginally interesting in and of itself, there are two observations to be made:

    1. Microsoft, Nintendo, and Sony will feel an impact. More functionality on cheaper hardware which can easily be upgraded; I bet quite a few managers are not happy at the moment.
    2. While the stand-alone Linux Steam client was initially targeted at Ubuntu, SteamOS is based on Debian Wheezy.

    Actual Linux (not Android) for the end user

    The first one means more Linux installations. In the living room. On a machine that children are really focussed on and will want to play with, quite literally.

    The next logical step is for people who play games to install SteamOS on their other machines; desktops, laptops, everywhere they want to game.

    This could really be the tipping point where the average adolescent computer enthusiast no longer needs to reboot into Linux to fool around, but the other way round: they boot into Windows only for a few select legacy applications, like Office or Photoshop, which don't run under Wine. And once this momentum starts to shift, other software vendors will follow the money trail.

    Could 2014 finally be.... the year of Linux on the desktop...?

    Debian vs Debian-based

    The latter one is also really interesting... Obviously, I don't know why Valve decided to go down this road, but there are several reasons which come to mind:

    • No need for the extra bloat
    • They wanted to avoid the tie-in with another for-profit entity
    • Unhappiness with some technical decisions made by Ubuntu/Canonical
    • Lack of faith in the long-term governance of Ubuntu

    What we are left with is a major player entering the ring of Linux for end-users and choosing Debian over Ubuntu. Hopefully, improvements to the base system will be fed back upstream, enabling all Debian-based distributions to profit easily, not only Ubuntu-based ones.

    I am willing to bet that two years ago, SteamOS would have been based on Ubuntu, not Debian. Recently, there's been a lot of backlash over various decisions which Canonical forced onto Ubuntu, and it will be interesting to see how this plays out in the long run... It will also be interesting to see how much pain Linux Mint and Kubuntu will endure.

    Forecast

    Users

    All in all, we are looking at a massive influx of new users into the Debian ecosystem. How massive? 65 million registered users massive. 7 million concurrent users, 1.2 million of them playing the top 100 games at the same time massive. This is huge.

    Contributors

    In time, a substantial part of that userbase will switch one or more of their machines over to SteamOS.

    The tinkerers among them will realize they can install plain Debian and install Steam as a package.

    The hackers among those will start to improve upon their systems; and what better way to do that than to go upstream?

    If even a tiny fraction of users makes it this far, the number of contributors actively involved with Debian will skyrocket, if we let them join. Raspbian and some other not-quite-ideal decisions come to mind.

    Vendors

    Commercial software vendors need to stay profitable. Thus, they are forced to support distributions which promise enough paying users. In the past, this meant mainly SuSE and Red Hat; they had commercial backers, went through certifications, etc. In the recent past, this also meant Ubuntu.

    All of a sudden, Debian stable has a potential market of tens of millions of average computer users and computer enthusiasts. A lot of whom will want to continue to use their OS of choice at work, as well.

    Oh boy...

    by Richard 'RichiH' Hartmann at December 14, 2013 11:57 AM

    December 01, 2013

    Md's blog

    Easily installing Debian on a Cubieboard

    I recently bought a Cubieboard to replace my old Sheevaplug which has finally blown a power supply capacitor (this appears to be a common defect of Sheevaplugs), so I am publishing these instructions which show how to install Debian on sunxi systems (i.e. based on the Allwinner A10 SoC or one of its newer versions) with no need for cross compilers, emulators or ugly FAT partitions.

    This should work on any sunxi system as long as U-Boot is at least version 2012.10.

    The first step is to erase the beginning of the SD card to remove anything in the unpartitioned space which may confuse U-Boot, then partition and format it as desired. The first partition must begin at 1MB (1024*1024/512 = 2048 sectors) because the leading unpartitioned space is used by the boot loaders.

    dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=1
    parted /dev/mmcblk0
    
      mklabel msdos
      mkpart primary ext4 2048s 15G
      unit s
      print
      mkpart primary linux-swap ... -1
    
    mkfs.ext4 -L root /dev/mmcblk0p1
    mkswap --label swap /dev/mmcblk0p2
    

    Download the boot loaders and an initial kernel and install them:

    tar xf cubieboard_hwpack.tar.xz
    dd if=bootloader/sunxi-spl.bin of=/dev/mmcblk0 bs=1024 seek=8
    dd if=bootloader/u-boot.bin of=/dev/mmcblk0 bs=1024 seek=32
    
    mount /dev/mmcblk0p1 /mnt
    mkdir /mnt/boot/
    
    cp kernel/script.bin kernel/uImage /mnt/boot/
    

    script.bin is Allwinner's proprietary equivalent of the device tree: it will be needed until sunxi support is fully merged into mainline kernels.

    U-Boot needs to be configured to load the kernel from the ext4 file system (join the lines at \\, this is not a supported syntax!):

    cat << END > /mnt/boot/uEnv.txt
    # kernel=uImage
    root=/dev/mmcblk0p1 rootwait
    boot_mmc=ext4load mmc 0:1 0x43000000 boot/script.bin && ext4load mmc 0:1 0x48000000 boot/${kernel} \\
      && watchdog 0 && bootm 0x48000000
    END
    

    Now the system is bootable: add your own root file system or build one with debootstrap. My old Sheevaplug tutorial shows how to do this without a working ARM system or emulator (beware: the other parts are quite obsolete and should not be trusted blindly).

    If you have an old armel install around it will work as well, and you can easily cross-grade it to armhf as long as it is up to date with at least wheezy (the newer, the better).

    You can also just use busybox for a quick test:

    mkdir /mnt/bin/
    dpkg-deb -x .../busybox-static_1.21.0-1_armhf.deb .
    cp bin/busybox /mnt/bin/
    ln -s busybox /mnt/bin/sh
    

    After booting the busybox root file system you can run busybox --install /bin/ to install links for all the supported commands.

    Until Debian kernels support sunxi (do not hold your breath: there are still many parts which are not yet in mainline) I recommend installing one of Roman's kernels:

    dpkg -i linux-image-3.4.67-r0-s-rm2+_3.4.67-r0-s-rm2+-10.00.Custom_armhf.deb
    mkimage -A arm -O linux -T kernel -C none -a 40008000 -e 40008000 \
      -n uImage -d /boot/vmlinuz-3.4.67-r0-s-rm2+ /boot/uImage-3.4.67-r0-s-rm2+
    

    An initramfs is not needed with these kernels for most setups, but one can be created with:

    update-initramfs -c -k 3.4.67-r0-s-rm2+
    mkimage -A arm -T ramdisk -C none -n uInitrd \
      -d /boot/initrd.img-3.4.67-r0-s-rm2+ /boot/uInitrd-3.4.67-r0-s-rm2+
    

    /boot/uEnv.txt will have to be updated to load the initramfs.

    Since the Cubieboard lacks a factory-burned MAC address you should either configure one in script.bin or (much easier) add it to /etc/network/interfaces:

    iface eth0 inet dhcp
            hwaddress ether xx:xx:xx:xx:xx:xx
    

    To learn more about the Allwinner SoCs boot process you can consult 1 and 2.

    December 01, 2013 12:40 PM

    November 27, 2013

    RichiH's blog

    pdiffs

    Can we stop pretending that defaulting to pdiffs was ever a good idea, now?

    # aptitude update
    [...pain...]
    Get:431 http://ftp5.gwdg.de testing/main 2013-11-27-1437.17.pdiff [46 B]
    Fetched 2.445 kB in 9min 15s (4.401 B/s)
    
    # aptitude -o Acquire::Pdiffs=false
    [...joy...]
    Get:17 http://ftp5.gwdg.de testing/non-free Translation-en [69,4 kB]
    Fetched 616 kB in 7s (85,7 kB/s)
    
    

    by Richard 'RichiH' Hartmann at November 27, 2013 09:52 PM

    November 08, 2013

    mrmist's blog

    Broadband Rant


    It’s hard to see how targets of 90%+ coverage are going to be met in this country, when we can’t get fibre broadband even on our new housing estate in a redeveloping area.

    The next street along has an upgraded cabinet, and one further down has an upgraded cabinet, but our cabinet remains outdated. I am told that our cabinet does not meet the “financial criteria” due to having too few houses connected. To me, it seems that hardly any similar cabinets would ever make the criteria – in other words, unless you happen to be extremely lucky, or you’re living in a city centre, you can forget it, regardless of promises to cover 90% of the UK.

    I guess what that 90% figure means is that 90% of exchanges will be capable of providing fibre broadband – even if only 1/3 of the connected cabinets can.

    It seems to me that having financial criteria from a single provider makes the aims of the project incompatible with the implementation.

    by Mrmist at November 08, 2013 08:20 AM

    November 02, 2013

    Md's blog

    New PGP key

    Since my current PGP key is a 1024-bit DSA key generated in 1998, I decided that it is time to replace it with a stronger one: there are legitimate concerns that breaking 1024-bit DSA is well within the reach of major governments.

    I have been holding out for the last year waiting for GnuPG 2.1, which will support elliptic curve cryptography, but I recently concluded that adopting ECC now would not be a good idea: Red Hat still does not fully support it due to unspecified patent concerns, and there is no consensus in the cryptanalyst community about the continued strength of (some?) ECC algorithms.

    So I created three fancy keys: a 4096-bit main key for offline storage, which hopefully will be strong enough for a long time, and two 3072-bit subkeys for everyday use.

    I have published a formal key transition statement, and I would appreciate it if people who have signed my old key would also sign the new one.

    What follows are the instructions that I used to generate these PGP keys. They follow the current best practices and only reference modern software.

    While the GnuPG defaults are usually appropriate, I think that it is a good idea to use a stronger hash for the signatures of very long-lived keys. I could not find a simple way to "upgrade" the algorithm of existing key self-signatures.

    echo 'cert-digest-algo SHA256' >> ~/.gnupg/gpg.conf

    First, generate an RSA/4096 sign-only key, which will be your master key and may be stored offline. Then add to it two RSA/3072 subkeys (one sign-only and one encrypt-only):

    # generate a RSA/4096 sign only key
    gpg2 --gen-key
    # add two RSA/3072 subkeys (sign only and encrypt only)
    gpg2 --edit-key 8DC968B0

    Since GnuPG lacks a command to remove the master secret key while keeping its secret subkeys, you need to delete the complete secret keys and then re-import only the subkeys:

    gpg2 --export-secret-keys 8DC968B0 > backup.secret
    gpg2 --export-secret-subkeys 8DC968B0 > backup.subkeys
    gpg2 --delete-secret-key 8DC968B0
    gpg2 --import backup.subkeys

    Then you can re-import the complete keys into a different secret keyring, which can be stored offline:

    mkdir ~/.gnupg/master/
    gpg2 --no-default-keyring \
      --keyring ~/.gnupg/pubring.gpg \
      --secret-keyring ~/.gnupg/master/secring.gpg \
      --import backup.secret

    Now you can move ~/.gnupg/master/ to a USB stick. You are supposed to protect the master secret key with a strong passphrase, so there is no point in using block level encryption on the removable media.

    Since you are only using the master key to sign other keys, it only needs to be configured as the second keyring in ~/.caffrc:

    $CONFIG{'secret-keyring'} = $ENV{HOME} . '/.gnupg/master/secring.gpg';

    It is also a good idea to have a hard-copy backup of your keys, since the lifetime of USB sticks should not be trusted too much:

    paperkey -v --output printable.txt --secret-key backup.secret
    a2ps -2 --no-header -o printable.ps printable.txt

    Some references that I used:

    November 02, 2013 11:03 PM

    erry's blog

    Object-Oriented Programming (OOP) in PHP

    Hello and welcome to a (late) Halloween post! We’re going to discuss PHP OOP by making classes for Ghosts, Zombies, and Vampires.
    But before that, a little introduction. If you’ve read my JavaScript OOP post you may have an idea of what to expect. If not, don’t worry, you don’t need to read it. In a nutshell, objects are structures that can hold other variables (properties) or functions (methods), and interact with other objects.

    The first step towards OOP in PHP is to make a class. A class defines the methods and properties of objects. Classes can also inherit methods from other classes, but we’ll look at that in a bit. You can think of a class as a template or design for every object of that class.

    Imagine a ghost. It has a colour (white, red, blue), and maybe a name. Additionally, it can attack humans by scaring them. You could make a class similar to this:

    <?php
    class Ghost { //use class to define classes!
        private $colour;
        private $name;
    
        public function attack ($human_name) {
            print $this->name . ' attacked ' . $human_name . ' by scaring them!';
        }
    }
    ?>
    

    Let’s explain a few things.

        private $colour;
    

    To define object properties and methods, you can use the private, public, or protected keywords. ‘private’ will prevent any code outside that class* from accessing the property, ‘protected’ will allow classes that extend the class (children) to access the property as well, and ‘public’ will allow any piece of code to access the property.

    * Parents can always access children’s properties.
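    As a quick, self-contained sketch of the three keywords in action (the class names here are illustrative, not from this post):

```php
<?php
// Illustrative sketch of the three visibility levels.
class ParentClass {
    private   $secret = 'class-only';
    protected $shared = 'class-and-children';
    public    $open   = 'anyone';

    public function reveal() {
        return $this->secret;    // private: fine from inside the class itself
    }
}

class ChildClass extends ParentClass {
    public function peek() {
        return $this->shared;    // protected: children may access it
        // return $this->secret; // would fail: private to ParentClass
    }
}

$c = new ChildClass();
echo $c->open;     // public: accessible from anywhere
echo $c->peek();   // protected, reached through a method of the child
echo $c->reveal(); // private, reached through the parent's own method
// echo $c->shared; // fatal error: cannot access protected property
// echo $c->secret; // fatal error: cannot access private property
```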

    public function attack ($human_name) {
        print $this->name . ' attacked ' . $human_name . ' by scaring them!';
    }
    

    $this works only inside object methods, and always refers to the object you’re working with.

    Now we can create our objects.

    <?php
    $ghost = new Ghost();
    ?>
    

    Of course that’s not very useful as it is. We can’t change the name or colour here, because it’s private. We may not want to make our properties public and accessible to the whole world, either. So we could make ‘getters’ and ‘setters’.
    Basically, these are public methods that allow us to indirectly access private properties. One advantage of them is that we could then deny certain values if necessary, or call other required code.

    <?php
    class Ghost { //use class to define classes!
        private $colour;
        private $name;
    
        public function attack ($human_name) {
            print $this->name . ' attacked ' . $human_name . ' by scaring them!';
        }
    
        public function get_colour () {
            return $this->colour;
        }
    
        public function get_name () {
            return $this->name;
        }
    
        public function set_colour ($new_colour) {
            $this->colour = $new_colour;
        }
    
        public function set_name ($new_name) {
            $this->name = $new_name;
        }
    }
    ?>
    

    Now we can do this:

    <?php
    $ghost = new Ghost();
    $ghost->set_name('erry');
    $ghost->set_colour('pink');
    $ghost->attack('reader');
    ?>
    

    All these properties and methods will be shared by every single ghost object. Convenient and tidy, right?
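    To see that each object still keeps its own copies of the properties, create two ghosts and give them different names (a small sketch; the class is repeated in compact form so the snippet runs on its own):

```php
<?php
// Compact copy of the Ghost class from above, so this snippet runs alone.
class Ghost {
    private $colour;
    private $name;
    public function set_name($n)   { $this->name = $n; }
    public function set_colour($c) { $this->colour = $c; }
    public function get_name()     { return $this->name; }
    public function get_colour()   { return $this->colour; }
}

$casper = new Ghost();
$casper->set_name('Casper');
$casper->set_colour('white');

$boo = new Ghost();
$boo->set_name('Boo');
$boo->set_colour('green');

// The methods are shared, but each object holds its own $name and $colour:
echo $casper->get_name(); // Casper
echo $boo->get_name();    // Boo
```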

    You can take this a bit further with a bit of magic. Every time you create an object with new Ghost(); a constructor function is called. You can define this constructor to do whatever you want:

    <?php
    class Ghost { //use class to define classes!
        private $colour;
        private $name;
    
        public function __construct ($name, $colour) { //Constructor!
            $this->name = $name;
            $this->colour = $colour;
        }
    
        public function attack ($human_name) {
            print $this->name . ' attacked ' . $human_name . ' by scaring them!';
        }
    
        public function get_colour () {
            return $this->colour;
        }
    
        public function get_name () {
            return $this->name;
        }
    
        public function set_colour ($new_colour) {
            $this->colour = $new_colour;
        }
    
        public function set_name ($new_name) {
            $this->name = $new_name;
        }
    }
    ?>
    

    You can then use it like this:

    <?php
    $ghost1 = new Ghost('erry','pink');
    $ghost1->attack('reader');
    $ghost2 = new Ghost('reader','blue');
    $ghost2->attack('erry');
    ?>
    

    We talked about extending classes earlier. Now we can demonstrate this. Imagine you have similar objects that aren’t quite the same: Ghosts, Zombies, and Vampires. They’re all monsters, and they all have colours and names, but they attack people in a different way. If you wanted to write classes for all of them, you’d be copy-pasting code a lot. Instead, you could write a Monster class with the common methods and properties, and extend it for methods and properties that change.

    <?php
    class Monster { //use class to define classes!
        protected $colour;
        protected $name;
    
        public function __construct ($name, $colour) { //Constructor!
            $this->name = $name;
            $this->colour = $colour;
        }
    
        public function get_colour () {
            return $this->colour;
        }
    
        public function get_name () {
            return $this->name;
        }
    
        public function set_colour ($new_colour) {
            $this->colour = $new_colour;
        }
    
        public function set_name ($new_name) {
            $this->name = $new_name;
        }
    }
    
    class Ghost extends Monster { //Child classes 'extend' the parent.
        public function attack ($human_name) {
            print $this->name . ' attacks ' . $human_name . ' by scaring them!';
        }
    }
    
    class Zombie extends Monster {
        public function attack ($human_name) {
            print $this->name . ' attacks ' . $human_name . ' by eating their brains!';
        }
    }
    
    class Vampire extends Monster {
        public function attack ($human_name) {
            print $this->name . ' attacks ' . $human_name . ' by drinking their blood!';
        }
    }
    ?>
    

    As you can see, we just changed the Ghost class to Monster, and removed attack. We then made three child classes that extend it, and just implement the attack method.
    Note that we also made the two private properties protected, so that the children can access them. They wouldn’t be accessible from the child classes if they were private.

    <?php
    $ghost = new Ghost('erry','pink');
    $ghost->attack('reader');
    $zombie = new Zombie('erry','blue');
    $zombie->attack('reader');
    $vampire = new Vampire('erry','white');
    $vampire->attack('reader');
    ?>
    

    Will then work as expected :D

    You could also declare the attack method in the Monster class without defining it. By making it abstract, you force all children of the class to implement it:

    abstract class Monster {
    //...
        abstract public function attack ($human_name);
    }
    

    Note how that function is declared without being implemented. Classes that extend Monster are now forced to implement the attack method. abstract also prevents you from directly creating Monster objects, forcing you to use child classes, which is also good for our design here.
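    A quick self-contained sketch of what abstract buys you (it returns strings instead of printing, purely so the behaviour is easy to check; in PHP 7+ instantiating an abstract class surfaces as a catchable Error):

```php
<?php
abstract class Monster {
    // Declared but not implemented: every child must provide attack().
    abstract public function attack($human_name);
}

class Zombie extends Monster {
    public function attack($human_name) {
        return "ate {$human_name}'s brains";
    }
}

$z = new Zombie();
echo $z->attack('reader'); // children are forced to implement attack()

try {
    $m = new Monster();     // abstract classes cannot be instantiated...
} catch (Error $e) {
    echo $e->getMessage();  // ...PHP 7+ throws a catchable Error here
}
```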

    Well, I think that’s all for now… Boo!

    by Errietta Kostala at November 02, 2013 09:30 PM

    October 27, 2013

    erry's blog

    Protected: On Gaining Confidence

    This post is password protected. To view it please enter your password below:

    by Errietta Kostala at October 27, 2013 02:40 PM

    September 30, 2013

    erry's blog

    Experimenting with jquery mobile

    I recently checked out jquery mobile, and I was quite impressed. It’s there to help you make a mobile user experience, but what impressed me more was a few things it does in the background, and how easy it is to use.

    To begin with, if you just load the jquery mobile CSS and script:

    <link rel="stylesheet" 
    href="//code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css" />
    <script src="//code.jquery.com/jquery-1.9.1.min.js"></script>
    <script src="//code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>
    

    You’re already getting some of its functionality on your website, without doing anything else. For example, if you use links in your website, you will now notice that they will all automatically use AJAX and give the user a sleek loading animation.
    Also, some of your elements will be styled, naturally. For example, checkboxes will automagically become mobile-friendly:

    <label>
        <input type="checkbox" /> I'm a checkbox!
     </label>
    

    As mentioned earlier, the jquery mobile syntax was surprisingly easy to use. Let’s look at some examples:

    Dialogs

    You can designate any other page to open as a dialog by giving your link a data-rel="dialog" attribute. Personally, I think having it in its own page is cleaner, anyway.

    <a href="/page" data-rel="dialog">Open dialog</a>
    

    What isn’t immediately obvious, though, is that your target page needs some specific HTML markup to work, too. It should look like this:

    <div data-role="dialog">
        <div data-role="header" data-theme="d">
            <h1>Dialog Header</h1>
        </div>
    
        <div data-role="content">
            Dialog content.
            <a href="dialog/index.html" data-role="button" data-rel="back" data-theme="b">
            Ok
            </a>
            <a href="dialog/index.html" data-role="button" data-rel="back" data-theme="c">
            Cancel
            </a>
        </div>
    </div>
    

    So, your container has to be data-role="dialog", then data-role="header" and data-role="content" designate the dialog header and content. data-rel="back" on a link will make it a close button.
    The data-theme="" attribute is optional, but it can be used to style various components. The themeroller lets you create several "swatches" for your theme, with different colors, but the defaults are mentioned in the documentation.

    Collapsibles/Accordions

    The way to make a collapsible is straightforward:

    <div data-role="collapsible">
        <h1>Click on me!</h1>
        <p>
            Collapsible content.
        </p>
    </div>
    

    The only special markup there is data-role="collapsible". The heading (which can be any heading from h1 to h6) automatically becomes the collapsible toggle.
    If you want to make an accordion instead, just put your collapsibles in a parent with data-role="collapsible-set" like so:

    <div data-role="collapsible-set">
            <div data-role="collapsible">
                <h1>Collapsible 1</h1>
                <p>
                    Collapsible 1 content.
                </p>
            </div>
    
            <div data-role="collapsible">
                <h1>Collapsible 2</h1>
                <p>
                    Collapsible 2 content.
                </p>
            </div>
        </div>
    

    That’s all it takes!

    Switches

    Another thing that you may want to do is provide switches instead of regular checkboxes. To do this, just make a select with the attribute data-role="slider", and give it two options.

    <label>
        <p>
            Turn notifications:
        </p>
        <select  data-role="slider">
            <option value="off">Off</option>
            <option value="on" selected>On</option>
        </select>
    </label>
    

    I made the default state on by giving the on option the selected attribute here.

    Headers/footers

    You can make a cool header/footer for your page, by using data-role:

    <div data-role="header" >
      <h1>Header!</h1>
    </div>
    
    <div data-role="footer" >
        <h1>Fixed Footer!</h1>
        Blah blah
    </div>
    

    And you can also use data-position="fixed" to make them stick in place.

    Tables

    Tables can either ‘reflow’ or change the number of columns when the screen size changes.

    Reflow

    Reflow is the default mode for a responsive table, and it means that the table columns will change to a stacked presentation on a small screen.

    <table data-role="table" id="movie-table" 
    data-mode="reflow" class="ui-responsive table-stroke">
      <thead>
        <tr>
          <th data-priority="1">Student ID</th>
          <th data-priority="persist">Student Name</th>
          <th data-priority="2">Year</th>
          <th data-priority="3">Mark</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <th>1</th>
          <td>Errietta Kostala</td>
          <td>2</td>
          <td>100%</td>
        </tr>
        <tr>
          <th>1</th>
          <td>Errietta Kostala</td>
          <td>2</td>
          <td>100%</td>
        </tr>
      </tbody>
    </table>
    

    data-mode="reflow" is the key here. ui-responsive changes the table back to a normal horizontal column view when the screen becomes big again, and table-stroke just adds borders to the table.

    Column Toggle

    Column toggle means certain columns get hidden when the screen becomes small. This is where the data-priority and ‘persist’ values that you saw above are actually used. The column with data-priority="persist" will always stay; the other columns stay depending on how high their priority is. So the one with priority 3 will be the first to go, and the one with priority 1 will be the last. Just change data-mode in the table above to columntoggle to do this; your page will also automatically acquire a button which allows the user to select which columns are displayed in this mode.

    <table data-role="table" id="movie-table"
    data-mode="columntoggle" class="ui-responsive table-stroke">
      <thead>
        <tr>
          <th data-priority="2">Student ID</th>
          <th data-priority="persist">Student Name</th>
          <th data-priority="3">Year</th>
          <th data-priority="1">Mark</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <th>1</th>
          <td>Errietta Kostala</td>
          <td>2</td>
          <td>100%</td>
        </tr>
        <tr>
          <th>1</th>
          <td>Errietta Kostala</td>
          <td>2</td>
          <td>100%</td>
        </tr>
      </tbody>
    </table>
    

    Sliders

    Just use the normal <input type="range"> syntax.

    <label >Slider:
      <input type="range" min="0" max="10" step=".1" value="5">
    </label>
    

    data-highlight can give it a highlight:

    <label >Slider:
        <input type="range" data-highlight="true" min="0" max="10" step=".1" value="5">
    </label>
    

    Put two sliders within a data-role="rangeslider" element for a range slider.

    <div data-role="rangeslider">
        <label for="range-1a">Rangeslider:</label>
        <input type="range" name="range-1a" id="range-1a" min="0" max="100" value="40">
        <label for="range-1b">Rangeslider:</label>
        <input type="range" name="range-1b" id="range-1b" min="0" max="100" value="80">
    </div>
    

    Overall, jquery mobile is at least a good start for building a mobile GUI, even if it’s not suitable as a full solution. Also, as said earlier, it can be themed to your application’s needs, so you’re not stuck with the default look!

    Until next post, bye bye!

    by Errietta Kostala at September 30, 2013 04:42 PM

    September 02, 2013

    erry's blog

    Jquery Infinite Scroll & Backend

    Hello!

    The newest thing that's caught my attention is a jQuery infinite scroll plugin. Infinite scroll plugins like this populate the page with new content as the user scrolls down, instead of making them click through next/previous page buttons. I believe this is an enhancement in usability and a good idea to employ in a web application. The project includes a WordPress plugin as well, but I will show you the jQuery plugin.

    Although the examples just read text content from a database, you can use them with any HTML element. Images, video, anything!

    If you want to read up on the plugin's documentation, it's available on GitHub.
    Most options aren't required, however. These are the options that you have to set:

    /* Call this where your dynamic entries
       will be. */
    $('#posts').infinitescroll({
        /* The CSS selector of your regular next/previous page
        navigation. This will be hidden when the script 
       loads */
        navSelector  : "#pages", 
        /* The CSS selector of the 'next page' link */
        nextSelector : "#next",
        /* The CSS selector of each item that will be
           appended while scrolling (e.g. your posts)*/
        itemSelector : ".post",
    });
    

    Note: The plugin also expects a specific URL format, but it's pretty flexible: something like /page/2 or ?page=2 will work. However, the URL has to carry a page number that goes up by one for each request, not the id of the next record to load (like page=5 for the second batch of 5 items). I used to do it lazily like that, but it didn't work out for me ;)
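    The page-number expectation can be pictured as a simple rewrite of the next link: the plugin finds the trailing number in the URL and bumps it by one for each batch it loads. A rough sketch (a hypothetical helper, not the plugin's actual code):

```javascript
// Given the current "next page" URL, produce the URL requested after the
// next batch is appended: the last run of digits goes up by one.
function nextPageUrl(url) {
  return url.replace(/(\d+)(?!.*\d)/, n => String(Number(n) + 1));
}

console.log(nextPageUrl('/page/2')); // "/page/3"
console.log(nextPageUrl('?page=2')); // "?page=3"
```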

    With a configuration like that, your HTML should look something like this:

    <div id="posts">
        <div id="pages">
          <a href="/page/1">Previous Page</a> <a href="/page/3" id="next">Next Page</a>
        </div>
        <div class="post">
          First post content
        </div>
        <div class="post">
          Second post
        </div>
        <!-- MOAR !-->
    </div>
    

    Now, of course, you have to write a backend for it (or modify your current backend). I actually wrote two examples, in PHP and Catalyst.

    A PHP Example

    <?php
        $mysqli = new mysqli("host", "username", "password", "database");
     
        # Number of entries to load per page.
        $limit = 5;
    
        # Get the page number from GET, or set it to 0.
        $index = isset ( $_GET['page'] ) && $_GET['page'] ? (int) $_GET['page'] : 0;
    
        $page = $index * $limit;
    
        # Get the total number of posts. We need this to know when we've reached the last
        # page.
        # You probably want to do some error handling here, if mysql queries fail.
        $statement = $mysqli->prepare ("SELECT COUNT(*) FROM posts");
        $statement->execute();
    
        $statement->bind_result($count);
        $statement->fetch();
        #The result is now in the $count variable.
    
        $statement->close();
    
        # Get the posts in that 'page'
        $statement = $mysqli->prepare 
        ("SELECT post_title, post_content FROM posts LIMIT ?, ?");
        $statement->bind_param('ii', $page, $limit);
        $statement->execute();
    
        $result = $statement->get_result();
    
        # And start outputting HTML...
    
        echo '<div id="posts">';
    
        while ( $row = $result->fetch_array() ) {
           # Output one .post per database row, escaping the
           # user-supplied values to avoid XSS :)
            echo '<div class="post">
              <h1>' . htmlspecialchars($row['post_title']) . '</h1>
              <p>' . htmlspecialchars($row['post_content']) . '</p>
            </div>';
        }
    
        $statement->close();
    
        echo '</div>'; # #posts
    
        echo '<div id="pages">';
    
        if ($page != 0) {
            echo '<a href="?page=' . ($index-1) . '">Previous Page</a> ';
        } if ($page + $limit < $count) {
            echo '<a id="next" href="?page=' . ($index+1) . '">Next Page</a>';
        }
    
        echo "</div>";
    
        # Output the JS too
        echo "
        <script src='jquery-1.10.2.min.js'></script>
        <script src='jquery.infinitescroll.min.js'></script>
     
        <script type='text/javascript'>
        $(document).ready(function() {
            $('#posts').infinitescroll({
                debug: true,
                navSelector  : '#pages',
                nextSelector : '#next',
                itemSelector : '.post',
    
            });
        });
        </script>";
    
        $mysqli->close();
    ?>
    

    And now you see how to implement the plugin in your PHP backend!
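    The paging arithmetic in the script above is worth spelling out: the SQL offset is page index times limit, a previous link is shown while the offset is non-zero, and a next link while offset + limit is still below the total row count. The same logic as a small sketch (hypothetical names):

```javascript
// Mirror of the PHP paging logic: which rows to fetch for a given page
// index, and whether previous/next links should be rendered.
function pageWindow(index, limit, total) {
  const offset = index * limit;
  return {
    offset,                          // SQL: LIMIT offset, limit
    limit,
    hasPrevious: offset !== 0,
    hasNext: offset + limit < total,
  };
}

console.log(pageWindow(0, 5, 12)); // first page: no previous, next exists
console.log(pageWindow(2, 5, 12)); // last page: previous exists, no next
```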

    A Catalyst Example

    I assume here that you use DBIx::Class in Model::DB and View::TT as your view. You can use other things, but you'll have to change the corresponding parts ;)
    Oh, and that your controller is Test::Controller::Post.

    package Test::Controller::Post;
    use Moose;
    use namespace::autoclean;
    
    BEGIN { extends 'Catalyst::Controller' }
    
    # We'll make a base method for every valid link in /post.
    # Here we'll just find our post resultset.
    
    sub base :Chained('/') :PathPart('post') :CaptureArgs(0) {
        my ($self, $c) = @_;
    
        my $posts_rs = $c->model('DB::Post');
        $c->stash->{posts_rs} = $posts_rs;
    }
    
    # Handle /page/[argument]
    # The client will load for example /page/5, and our backend
    # will get it in the $page variable 
    
    sub page :Chained('base') :PathPart('page') :CaptureArgs(1) {
        my ($self, $c, $page) = @_;
    
        my $posts_rs = $c->stash->{posts_rs};
    
        # DBIx makes this so simple
    
        my $result_rs = $posts_rs->search(
            undef,
            {
                page => $page,
                rows => 5
            }
        );
    
        # Get the required data that our template will need to render.
        # The current page, first and last page, previous and next page,
        # And of course, the rows in the result set (posts)
    
        my $pager = $result_rs->pager;
        $c->stash->{current_page} = $pager->current_page;
        $c->stash->{first_page} = $pager->first_page;
        $c->stash->{last_page} = $pager->last_page;
    
        $c->stash->{previous_page} = $pager->previous_page;
        $c->stash->{next_page} = $pager->next_page;
    
        my @posts = $result_rs->all;
    
        if (!scalar @posts) {
            $c->detach('/default');
        }
    
        $c->stash->{posts} = \@posts;
    }
    
    # This will just choose the template for /page/NN
    
    sub view_page :Chained('page') :PathPart('') :Args(0) {
        my ($self, $c) = @_;
    
        $c->stash->{template} = 'post.tt';
    }
    
    # And finally, if someone just goes to /post, redirect them to the first page.
    
    sub index :Chained('base') :PathPart('') :Args(0) {
        my ( $self, $c ) = @_;
    
        $c->response->redirect($c->uri_for('/post/page/1'));
    }
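    The pager DBIx::Class returns (a Data::Page object) is just bookkeeping over the row count. As a sketch of what the accessors stashed above compute (illustrative JavaScript only, with undefined standing in for Perl's undef):

```javascript
// Rough equivalent of the pager accessors used in the stash above.
function pager(totalEntries, rowsPerPage, currentPage) {
  const lastPage = Math.max(1, Math.ceil(totalEntries / rowsPerPage));
  return {
    currentPage,
    firstPage: 1,
    lastPage,
    previousPage: currentPage > 1 ? currentPage - 1 : undefined,
    nextPage: currentPage < lastPage ? currentPage + 1 : undefined,
  };
}

console.log(pager(12, 5, 1)); // 12 rows at 5 per page = 3 pages; no previous on page 1
console.log(pager(12, 5, 3)); // page 3 of 3: no next page
```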
    
    

    Now you just need a template for it:

    <div id="posts">
        [% FOREACH post IN posts %]
            <div class='post'>
                <h1>[% post.post_title | html %]</h1>
    
                <p>
                    [% post.post_content | html %]
                </p>
    
            </div>
        [% END %]
    </div>
    
    <div id="pages">
        [% IF current_page != first_page %]
            <a href="[% c.uri_for ("/post/page/${previous_page}") %]">Previous Page</a>
        [% END %]
        [% IF current_page != last_page %]
            <a id="next" href="[% c.uri_for ("/post/page/${next_page}") %]">Next Page</a>
        [% END %]
    </div>
     <script src='[% c.uri_for ("/static/jquery-1.10.2.min.js") %]'></script>
    
     <script src='[% c.uri_for ("/static/jquery.infinitescroll.min.js") %]'></script>
     
     <script type="text/javascript">
        $(document).ready(function() {
            $('#posts').infinitescroll({
                debug: true,
                navSelector  : "#pages",
                nextSelector : "#next",
                itemSelector : ".post",
    
            });
        });
    </script>
    
    

    And now you should have a controller and view for it :]
    Note that I'm still learning Catalyst, so this may not be the best solution. Comments with improvements are welcome!

    Until next time, bye bye.

    by Errietta Kostala at September 02, 2013 06:09 PM

    August 17, 2013

    erry's blog

    Developing my solution to npower’s developer challenge

    About

    At some point in June 2013, I received an email from one of our tutors that let us know about npower’s developer challenge. The challenge was to create a working prototype of an online application that would run in desktop, mobile or tablet and help UK consumers view and control their energy use. It seemed interesting, and to be honest I liked the thought of winning first prize, so I thought I would create an application and enter.

    The final application can be seen on this page.

    First Steps

    The first thing I did in developing this application was to read the expectations and specifications and have a look at the dummy data provided by npower. There were 3 XML files: one had data on device wattage, another had power tariffs for each region, and the third had energy consumption data for several postcodes. After inspecting the XML files, the next step was of course to be able to read and manipulate them, so I wrote some functions to search the XML files by attributes and their values. I also created a sample web page to test them on. At this point, I was able to find the energy consumption of a specific postcode, and also compare it with the second XML file to get the cost of that consumption.

    Building the UI – Energy consumption per postcode

    Of course, I needed a prettier way to present this information to the user. Enter Google Charts, a JavaScript API that allows you to chart data in many different ways. I used a column chart to present how much energy was used in each month of the year.

    Desktop view

    The problem was that although the chart looked pretty, it didn’t fit well in small screens. In order to make the app fully responsive, I reduced the number of months of data shown depending on the screen resolution. Additionally, I changed the type of chart used depending on the screen orientation: if the width was greater than the height (desktop screens and landscape orientation) I used the vertical column chart, while on a portrait orientation, I used the horizontal bar chart.
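    That decision is a pure function of the viewport: compare width to height for the chart type, and cap the number of months by the available width. A sketch of the idea (the breakpoints here are made up for illustration, not the values the app actually used):

```javascript
// Decide how to draw the consumption chart for a given viewport:
// landscape-ish screens get vertical columns, portrait gets horizontal
// bars, and narrow screens show fewer months of data.
function chartConfig(width, height, months) {
  const type = width > height ? 'ColumnChart' : 'BarChart';
  const monthsShown = width >= 1024 ? months.length // assumed breakpoints
                    : width >= 480  ? 6
                    : 3;
  return { type, months: months.slice(-monthsShown) };
}

const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
console.log(chartConfig(1280, 800, months).type); // "ColumnChart"
console.log(chartConfig(320, 568, months));       // bar chart, last 3 months only
```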

    ‘Landscape’ view

    ‘Portrait’ view

    Building the UI- device wattage

    For this part of the application, I wanted to present the various devices that exist in a household along with the percent each device used. I thought that the best way to do this would be a pie chart.


    The user could add devices from a select element and the pie chart was populated with that information.



    I later improved the functionality by setting my own colours for the various pie ‘slices’ so that I could display a legend to the user in a different part of the page, which also allowed them to remove devices (slices).


    Additionally, when the user added more than a certain number of devices (which depended on the screen size), the smaller ones would be grouped under 'other'. If you clicked 'other', you would see the remaining devices.
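    The grouping step is a sort-and-split: keep the largest slices up to the screen's limit and sum everything else into one 'other' slice. A sketch of that logic (hypothetical names and wattages, not the contest code):

```javascript
// Collapse all but the top `max` devices into a single "Other" slice.
function groupSlices(devices, max) {
  const sorted = [...devices].sort((a, b) => b.watts - a.watts);
  const shown = sorted.slice(0, max);
  const rest = sorted.slice(max);
  if (rest.length) {
    shown.push({ name: 'Other', watts: rest.reduce((s, d) => s + d.watts, 0) });
  }
  return shown;
}

const devices = [
  { name: 'Kettle', watts: 3000 },
  { name: 'TV', watts: 150 },
  { name: 'Router', watts: 10 },
  { name: 'Lamp', watts: 60 },
];
console.log(groupSlices(devices, 2));
// Kettle and TV stay; Lamp and Router are summed into "Other"
```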



    Using External APIs

    Having done about everything I could with these dummy APIs, I wanted to do a little more. I searched data.gov.uk for relevant things and found some energy use data per middle layer super output area (MSOA). Unfortunately, most people don't know what their MSOA is, so I had to find an API that mapped postcodes to MSOAs. At first I used Map It, which was a great help, but I later switched to other government sources I eventually found. I was inspired to use maps similar to the ones in that tool, though: I wanted to present the MSOA area to the user so they had an idea of how big or small it was and which places it included. I found MSOA boundary data on another government site. That data wasn't in a format I could use, though, so I had to import it into a program and then export it to JSON, which was perfectly usable.

    We can see the boundaries of the MSOA illustrated in blue in the map


    Another problem I faced was that while the postcode-to-MSOA database I had found worked for England and Wales postcodes, it didn't quite work for Scottish ones, since Scotland uses a different kind of zone. I had to search yet two other government sites to get the equivalent information for Scotland, but fortunately it had a format similar to the other databases, so it was easy enough to implement.

    Generally, the challenge here was finding the right sources to use, and then adapting them to what the user was expected to know (their input) and what the user was expecting to see (output that made sense to them).

    In a more detailed view of the APIs used: I use MLSOA and LLSOA electricity and gas estimates to get the energy use per MSOA; ONSPD May 2013 and ONSPD May 2011 from the Office for National Statistics to convert postcodes into MSOAs for England and Wales; Middle layer super output area boundaries 2011 for England and Wales (Full Extend) from the same site; SIMD 2012 postcode lookup and Data Zone Lookup to convert postcodes into MSOAs for Scotland; and the Scottish MSOA boundaries from Scottish Neighbourhood Statistics.

    Even more detailed information, including links to the resources I used, can be found in my application's documentation.

    Of course, the energy use data itself was displayed using another bar chart:

    Here we see Google Charts in action again
    Having to search for all these APIs was something entirely new for me, and a good experience.

    Conclusion

    In general, irrespective of the results (which are yet to be announced, but I will post an update once they are), I learned a lot from developing this application: from JavaScript tricks and the Google Charts API to how to hunt for data and the right APIs to use in an application that needs them.

    by Errietta Kostala at August 17, 2013 10:15 PM

    August 16, 2013

    RichiH's blog

    Release Critical Bug report for Week 33

    One more for DebConf.

    The UDD bugs interface currently knows about the following release critical bugs:

    • In Total: 1547
      • Affecting Jessie: 1059 That's the number we need to get down to zero before the release. They can be split in two big categories:
        • Affecting Jessie and unstable: 857 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
          • 61 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
          • 44 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
          • 752 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
        • Affecting Jessie only: 202 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
          • 0 bugs are in packages that are unblocked by the release team.
          • 202 bugs are in packages that are not unblocked.

    How do we compare to the Squeeze release cycle?

    Week Squeeze Wheezy Diff
    43 284 (213+71) 468 (332+136) +184 (+119/+65)
    44 261 (201+60) 408 (265+143) +147 (+64/+83)
    45 261 (205+56) 425 (291+134) +164 (+86/+78)
    46 271 (200+71) 401 (258+143) +130 (+58/+72)
    47 283 (209+74) 366 (221+145) +83 (+12/+71)
    48 256 (177+79) 378 (230+148) +122 (+53/+69)
    49 256 (180+76) 360 (216+155) +104 (+36/+79)
    50 204 (148+56) 339 (195+144) +135 (+47/+90)
    51 178 (124+54) 323 (190+133) +145 (+66/+79)
    52 115 (78+37) 289 (190+99) +174 (+112/+62)
    1 93 (60+33) 287 (171+116) +194 (+111/+83)
    2 82 (46+36) 271 (162+109) +189 (+116/+73)
    3 25 (15+10) 249 (165+84) +224 (+150/+74)
    4 14 (8+6) 244 (176+68) +230 (+168/+62)
    5 2 (0+2) 224 (132+92) +222 (+132/+90)
    6 release! 212 (129+83) +212 (+129/+83)
    7 release+1 194 (128+66) +194 (+128/+66)
    8 release+2 206 (144+62) +206 (+144/+62)
    9 release+3 174 (105+69) +174 (+105/+69)
    10 release+4 120 (72+48) +120 (+72/+48)
    11 release+5 115 (74+41) +115 (+74/+41)
    12 release+6 93 (47+46) +93 (+47/+46)
    13 release+7 50 (24+26) +50 (+24/+26)
    14 release+8 51 (32+19) +51 (+32/+19)
    15 release+9 39 (32+7) +39 (+32/+7)
    16 release+10 20 (12+8) +20 (+12/+8)
    17 release+11 24 (19+5) +24 (+19/+5)
    18 release+12 2 (2+0) +2 (+2/+0)

    Graphical overview of bug stats thanks to azhag:

    by Richard 'RichiH' Hartmann at August 16, 2013 12:28 PM

    August 15, 2013

    RichiH's blog

    DebConf13 group photo

    Group photo and historic T-shirts for general consumption. Click through for full resolution.

    If you want to have all the JPGs and NEFs (raw files), feel free to grab them from this DebConf-local mirror. Those files still have the EXIF information intact to ease any post-processing done by people who can actually do this stuff. It would probably make sense to agree on a canonical version, crop, clean, and then put on numbers with names, etc.

    Group photo
    Historic T-shirts, in order

    by Richard 'RichiH' Hartmann at August 15, 2013 09:58 PM

    August 12, 2013

    RichiH's blog

    DebConf13 I

    DebConf!

    The venue here at Le Camp is pretty much perfect. Short walking distances, organic layout of the buildings, and a stunning view of the lake. I would be hard pressed to think of other venues which could be as nice...

    After arriving Fri/Sat night at 0330, Saturday was spent setting up the access points:

    Me prior to pull-ups on the roof beams
    Access points in pillow cases

    This little green valve caused a power outage in the server room, messing with servers, and resetting several switches to old configs. Xtaran had a lot of fun as a result of this.

    The valve of evil. And water.

    The ikiwiki BoF on Sunday was rather interesting. I will try to publish some notes from this BoF and the other Git-related ones towards the end of the week.

    The Gitify ALL the things BoF managed to fill the room from "full-ish" over "good thing we don't need the beamer and can use the space in front of the whitescreen" over "out of chairs" over "the chairs from other rooms won't fit any more" to the final state of "people stand around near the walls and in the doors". At a total of 54 people, turnout has been... unexpectedly high.

    The BoF started at 1130 and usually, slots are 45 minutes long. We extended our BoF into lunch time (I chose the slot just prior to lunch for precisely that reason) and finally finished at around 1245, i.e. 30 minutes late. After a quick show of hands on whether there was interest in another BoF, I applied for and got the next slot, tomorrow at 11:30, once again in BoF room 1 and just before lunch. It's called Gitify EVEN MORE of the things and will expand on use cases and best practices. At a guess, we will focus on managing configurations and photos in default and complex situations.

    Sadly, neither of those BoFs was taped.

    Afterwards, I had a chance to sit down with Lucas Nussbaum to talk over some points regarding the Debian Trademark Team.

    Finally, during Why Debian should (or should not) make systemd the default (it probably should not, but that's a different story), my kernel panicked. 3.9 and 3.10 have been less than ideal on my new X1 Carbon, but this was the first panic. I was not even done cussing when Ben Hutchings suddenly appeared at my side, telling me that yes, this particular module (mei) has been causing issues recently.

    Kernel panic

    As a closing note, I am really enjoying my first DebConf. Great venue, great people, great content, a wine/beer and cheese party tonight, and I found out that we have a fireplace and firewood...

    by Richard 'RichiH' Hartmann at August 12, 2013 05:20 PM

    August 09, 2013

    RichiH's blog

    Release Critical Bug report for Week 32

    As a more or less random data point for DebConf13, here's the current bug stats for Debian.

    Hope to see you there, and if you want to attend the BoF Gitify ALL the things or the talk Gitify your life you can do so on site, via IRC in #vcs-home on OFTC, or simply follow the live streams.

    If you see me in person: I have two crates of Schneider Aventinus with me and don't plan on bringing home any full bottles.

    The UDD bugs interface currently knows about the following release critical bugs:

    • In Total: 1532
      • Affecting Jessie: 1019 That's the number we need to get down to zero before the release. They can be split in two big categories:
        • Affecting Jessie and unstable: 832 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
          • 51 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
          • 38 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
          • 743 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
        • Affecting Jessie only: 187 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
          • 0 bugs are in packages that are unblocked by the release team.
          • 187 bugs are in packages that are not unblocked.

    Graphical overview of bug stats thanks to azhag:

    by Richard 'RichiH' Hartmann at August 09, 2013 04:15 PM

    freenode staffblog

    Reminder: Keep your NickServ email up to date.

    If you've registered with NickServ within the last few years then you'll have used an email address and we'll have sent you a mail to verify it. That was probably the last time you heard from us…

    …until you forget your password and find yourself unable to identify to your account. When that happens we can send an email (only to that same address) to verify your identity and reset your password.

    You aren't stuck with the email address you originally used, though! We'd very strongly recommend you take 5 minutes to double-check that the registered email address is current, especially in light of recent service closures. You don't need access to your old inbox to change your registered email, just your NickServ password.

    To view the current state of your account, while identified type:

    /msg nickserv info

    If you’d like to then change the registered email address, first…

    /msg nickserv set email [email protected]

    … then check your email inbox. We’ll have sent you another email with instructions to verify this new address.

    Your email address is hidden from other users by default. You can ensure this by setting:

    /msg nickserv set hidemail on

    Thanks for using freenode!

    by Pricey at August 09, 2013 09:18 AM

    August 07, 2013

    erry's blog

    twitter bootstrap and angularjs directives

    The past few days, I've been experimenting with AngularJS. It's an awesome JavaScript MVC framework and I recommend checking it out if you haven't!
    One of its features is directives, which, among other things, allow you to replace HTML elements that have a specific tag or attribute name with a template and adjust their behaviour. This is nice if you have complex HTML structures in a website which repeat themselves.
    Another awesome thing I've already posted about is Twitter Bootstrap, so I thought I would make some AngularJS directives for Bootstrap syntax that I have difficulty remembering.

    Note: I recommend having some knowledge of Twitter Bootstrap and AngularJS before reading this. Of course, the docs are always wonderful ;)

    To begin, let’s make sure we have all the needed things. Your page’s head should look something like this:

    <!DOCTYPE html>
    <html ng-app="directives">
        <head>
            <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css" />
            <link rel="stylesheet" type="text/css" 
            href="css/bootstrap-responsive.min.css" />
        </head>
        <body ng-controller="myController">
    

    We're loading the stylesheets for Bootstrap here, as well as having the 'directives' module handle our app and the 'myController' controller handle the whole page.

    And right before </body>, let’s load the scripts:

    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.7/angular.min.js">
    </script>
    <script src="js/directives.js" type="text/javascript"></script>     
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js">
    </script>
    <script src="js/bootstrap.js"></script>
    

    The scripts are AngularJS, our custom AngularJS code ('directives.js') where we will code our directives, jQuery, and Bootstrap.

    Let’s also define our module in directives.js:

    function myController ($scope) {
    }
    var module = angular.module('directives', []);
    

    Now with that out of the way, let’s get on with the directives.

    Tabs

    The first one is an example from the AngularJS website: tabs.

    This allows you to use the following syntax in your page:

    <tabs>
        <pane title="First pane">    
             Content here
        </pane>
        <pane title="Second pane">
             Second pane content here
        </pane>
    </tabs>
    

    The directive is as follows:

    module.directive('tabs', function() {
        return {
            restrict: 'E',
            transclude: true,
            scope: {},
            controller: function($scope, $element) {
                var panes = $scope.panes = [];
    
                $scope.select = function(pane) {
                    angular.forEach(panes, function(pane) {
                        pane.selected = false;
                    });
    
                    pane.selected = true;
                }
    
                this.addPane = function(pane) {
                    if (panes.length == 0)
                        $scope.select(pane);
    
                    panes.push(pane);
                }
            },
            template:
            '<div class="tabbable">' +
            '<ul class="nav nav-tabs">' +
            '<li ng-repeat="pane in panes" ng-class="{active:pane.selected}">'+
            '<a href="" ng-click="select(pane)">{{pane.title}}</a>' +
            '</li>' +
            '</ul>' +
            '<div class="tab-content" ng-transclude></div>' +
            '</div>',
            replace: true
        };
    });
    
    module.directive('pane', function() {
        return {
            require: '^tabs',
            restrict: 'E',
            transclude: true,
            scope: { title: '@' },
            link: function(scope, element, attrs, tabsCtrl) {
                tabsCtrl.addPane(scope);
            },
            template:
            '<div class="tab-pane" ng-class="{active: selected}" ng-transclude>' +
            '</div>',
            replace: true
        };
    });
    

    What's special about these directives is that the pane children let the parent know when they are added (by calling addPane), and the tabs controller keeps exactly one pane flagged as 'selected' at a time.
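    Stripped of the Angular plumbing, the parent/child handshake is small: each pane registers itself with the tabs controller via addPane, the first registered pane becomes selected, and select() flips the flags so exactly one pane is active. The same logic in plain JavaScript:

```javascript
// Minimal model of the tabs controller: panes register on creation and
// select() marks exactly one pane as selected.
function makeTabs() {
  const panes = [];
  function select(pane) {
    panes.forEach(p => { p.selected = false; });
    pane.selected = true;
  }
  function addPane(pane) {
    if (panes.length === 0) select(pane); // first pane starts selected
    panes.push(pane);
  }
  return { panes, select, addPane };
}

const tabs = makeTabs();
const first = { title: 'First pane' };
const second = { title: 'Second pane' };
tabs.addPane(first);
tabs.addPane(second);
console.log(first.selected, second.selected); // true undefined

tabs.select(second);
console.log(first.selected, second.selected); // false true
```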

    Media

    Bootstrap's media object markup normally looks like this:

    <div class="media">
        <a class="pull-left" href="#">
            <img class="media-object" src="image">
        </a>
        <div class="media-body">
            <h4 class="media-heading">heading</h4>
            <p>
                Content
            </p>
        </div>
    </div>
    

    This is not too much trouble, but I like this more:

    <media image="img/sample.png" heading="Heading here">
       Content
    </media>
    

    And this little directive….

    module.directive ('media', function() {
        return {
            restrict: 'E',
            transclude: true,
            scope: { heading: '@', image: '@' },
            template:
            '<div class="media">' +
                '<a class="pull-left" href="#">' +
                    '<img class="media-object" ng-src="{{ image }}">' +
                '</a>' +
                '<div class="media-body">' +
                    '<h4 class="media-heading">{{ heading }}</h4>' +
                    '<p ng-transclude>' +
                    '</p>' +
                '</div>' +
            '</div>',
            replace: true
        }
    });
    

    …will save time in the long run!

    The special things I did here were to use ng-src="{{ image }}" instead of just src= to prevent the browser from trying to download the literal {{ image }} before Angular interpolates it. Also, ng-transclude will copy the content of my <media> tag into the element that has that attribute, so the content can be anything you wish.

    <media image="img/sample.png" heading="Heading here">
    Content
    <media image="img/sample.png" heading="Another heading">
    Hello!
    </media>
    </media>
    

    Works as expected!

    Modals

    Let’s look at what code for a modal might look like:

    <div class="modal hide fade">
        <div class="modal-header">
            <button type="button" class="close" data-dismiss="modal" aria-hidden="true">
                &times;
            </button>
            <h3>Modal title</h3>
        </div>
        <div id="modal-body" class="modal-body">
            <p>Content</p>
        </div>
        <div class="modal-footer">
            <a href="#" onclick="cancelFn()" class="btn">Cancel</a>
            <a href="#" onclick="okFn()" class="btn btn-primary">Save changes</a>
        </div>
    </div>
    

    That's long to type every time you need it. Thankfully, we can make it look more like this:

    <modal ok-fn="destroyDialog()" cancel-fn="destroyDialog()"
    title="Modal Title" id="modal">
        Modal Content
    </modal>
    

    You can change ok-fn and cancel-fn of course.

    The directive is:

    module.directive('modal', function() {
        return {
            restrict: 'E',
            transclude: true,
            scope: { title:'@', okFn: '&', cancelFn: '&' },
            template:
            '<div class="modal hide fade">'
            +'<div class="modal-header">'
            +'<button type="button" class="close" '
            + 'data-dismiss="modal" aria-hidden="true">&times;</button>'
            +'<h3>{{ title }} </h3>'
            +'</div>'
            + '<div id="modal-body" class="modal-body">'
            + '<p ng-transclude></p>'
            + '</div>'
            + '<div class="modal-footer">'
            + '<a href="#" ng-click="cancelFn()" class="btn">Cancel</a>'
            + '<a href="#" ng-click="okFn()" class="btn btn-primary">Save changes</a>'
            + '</div>'
            + '</div>',
            replace: true
        }
    });
    

    The only 'special' things here are including <p ng-transclude>, which will copy the innerHTML of our <modal> into that element, and okFn: '&', cancelFn: '&', which means that these two functions will be called in the context of the parent controller.

    destroyDialog is defined in the parent controller:

    function myController ($scope) {
        $scope.destroyDialog = function() {
            $("#modal").modal("hide");
        };
    }
    

    Navbars

    Navbars aren't very complex, but the syntax can become, depending on what you want. Luckily, you can create a special 'navbar' attribute that turns a <ul> into a navbar:

      <ul navbar title='Title here'>
          <li class='active'><a href='javascript:;'>Menu item 1</a></li>
          <li><a href='javascript:;'>Menu Item 2</a></li>
          <li><a href='javascript:;'>Menu item 3</a></li>
      </ul>
      

      The directive:

      module.directive('navbar', function() {
          return {
              restrict: 'A',
              transclude: true,
              scope: { title: '@' },
              template:
              '<div class="navbar">' +
                  '<div class="navbar-inner">' +
                      '<a class="brand" href="#">{{ title }} </a>' +
                      '<ul class="nav" ng-transclude>' +
                      '</ul>' +
                  '</div>' +
              '</div>',
              replace: true
          }
      });
      

      We’re using ‘A’ in ‘restrict’ for the first time, because we want the directive to match an attribute, not an element name. Because the original element’s attributes are carried over to the resulting element, you can give your navbar other classes, as normal:

      <ul navbar class='navbar-fixed-top navbar-inverse' title='Title here'>
      

      And of course, ng-transclude will copy our <li>s to the resulting element.

      Responsive navbars

      The syntax for responsive navbars is more complex. Let’s make a directive for these, as well.

      The directive to create this:

      <ul navbar-responsive class='navbar-fixed-top navbar-inverse' title='Title here'>
      

      is as simple as this:

      module.directive ('navbarResponsive', function() {
          return {
              restrict: 'A',
              transclude: true,
              scope: { title: '@' },
              template:
              '<div class="navbar">' +
                  '<div class="navbar-inner">' +
                      '<div class="container">' +
                          '<a class="btn btn-navbar" data-toggle="collapse" ' +
                              ' data-target=".nav-collapse">' +
                              '<span class="icon-bar"></span>' +
                              '<span class="icon-bar"></span>' +
                              '<span class="icon-bar"></span>' +
                          '</a>' +
                          '<a class="brand" href="#">{{ title }}</a>' +
                          '<div class="nav-collapse collapse">' +
                              '<ul class="nav" ng-transclude>' +
                              '</ul>' +
                          '</div>' +
                      '</div>' +
                  '</div>' +
              '</div>',
              replace: true
          }
      });
      

      Carousel

      The original syntax for carousel is in the source of this page.

      Let’s improve it from that to this:

      <carousel id="carousel">
        <carousel-item>
          <div class="carousel-caption">
             <h4>First Thumbnail label</h4>
             <p>Content</p>
          </div>
          <img src="img/carousel1.png" />
        </carousel-item>
        <carousel-item>
         <div class="carousel-caption">
             <h4>Second Thumbnail label</h4>
             <p>Content</p>
          </div>
          <img src="img/carousel2.png" />
        </carousel-item>
      </carousel>
      

      Directive:

      module.directive ('carousel', function() {
          return {
              restrict: 'E',
              transclude: true,
              scope: { id: '@' },
              controller: function($scope, $element) {
                  var items = $scope.items = [];
                  $scope.selectedIndex = 0;
      
                  $scope.select = function (index) {
                      if ( index >= $scope.items.length || index < 0 ) {
                          return;
                      }
      
                      angular.forEach (items, function(item) {
                          item.selected = false;
                      });
      
                      items[index].selected = true;
                      $scope.selectedIndex = index;
                  }
      
                  this.addItem = function(item) {
                      items.push(item);
      
                      if (items.length == 1)
                          $scope.select (0);
                  }
              },
              template:
              '<div class="carousel slide">' +
                  '<ol class="carousel-indicators">' +
                      '<li ng-repeat="item in items" data-target="#{{id}}" '+
               'data-slide-to="{{$index}}" ng-click="select($index)" '+
                       'ng-class="{active:item.selected}"></li>' +
                  '</ol>' +
                  '<div class="carousel-inner" ng-transclude>' +
                  '</div>' +
                  '<a class="carousel-control left" ' +
                  'href="#{{id}}" ng-click="select(selectedIndex-1)">&lsaquo;</a>' +
                  '<a class="carousel-control right" ' +
                  'href="#{{id}}" ng-click="select(selectedIndex+1)" >&rsaquo;</a>' +
              '</div>',
              replace: true
          };
      });
      
      module.directive('carouselItem', function() {
          return {
              require: '^carousel',
              restrict: 'E',
              transclude: true,
              scope: {  },
              link: function(scope, element, attrs, carouselCtrl) {
                  carouselCtrl.addItem(scope);
              },
              template:
              '<div class="item" ng-class="{active: selected}" ng-transclude>' +
              '</div>',
              replace: true
          };
      });
      
      
      

      The magic lives in the carousel controller function, which keeps track of the selected carousel item. An indicator is generated for every item and calls the select function when clicked; the next and back buttons call that same function. Each carousel item registers itself with the parent controller through addItem.
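
      If you want to experiment with the selection logic outside Angular, the controller above can be restated as a small standalone sketch (same rules: one item selected at a time, out-of-range indices ignored, the first item added becomes active):

```javascript
// Standalone restatement of the carousel controller's logic; no Angular, no DOM.
function makeCarousel() {
    var items = [];
    return {
        items: items,
        selectedIndex: 0,
        select: function (index) {
            // Ignore out-of-range indices, just like the directive does.
            if (index >= items.length || index < 0) {
                return;
            }
            items.forEach(function (item) { item.selected = false; });
            items[index].selected = true;
            this.selectedIndex = index;
        },
        addItem: function (item) {
            items.push(item);
            // The first item registered becomes the active one.
            if (items.length === 1) {
                this.select(0);
            }
        }
    };
}

var carousel = makeCarousel();
carousel.addItem({});
carousel.addItem({});
carousel.select(1); // second item becomes active
carousel.select(5); // out of range, ignored
```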

      And of course, don’t forget:

      $("#carousel").carousel();
      

      is still needed for it to work.

      Well, that’s all for now. Remember to check back later for more... magic!

      A live demo of all this is here

    by Errietta Kostala at August 07, 2013 09:50 PM

    August 06, 2013

    RichiH's blog

    High security

    If you're interested in security, brogramming, and throwing a few decades of UNIX best practices overboard... There's an app for that!

    The GIF they are using is oddly fitting, if for an entirely different reason than what they probably had in mind.

    by Richard &#x27;RichiH&#x27; Hartmann at August 06, 2013 06:43 PM

    August 03, 2013

    RichiH's blog

    Death of a slowly demising platform

    Let's assume you have been running a platform for over a decade.

    Let's assume you used to be one of the cornerstones, one of the central pillars, maybe even part of the very foundation FLOSS was built upon.

    Let's assume you slept through several major shifts and stopped innovating a long time ago (along with your other large sister project, owned by the same people).

    Let's assume you realize all this.

    You then try and get your act together, work over your feature set, create a UI that's easy and pleasant to use, encourage collaboration, visualize important data, and otherwise revamp yourself, start innovating again and regain your status of a platform which really matters.

    Or you just do this and call it a day.

    by Richard &#x27;RichiH&#x27; Hartmann at August 03, 2013 10:45 AM

    July 31, 2013

    RichiH's blog

    [email protected]

    MIME-Version: 1.0
    Received: by 10.194.17.9 with HTTP; Wed, 31 Jul 2013 09:21:17 -0700 (PDT)
    Date: Wed, 31 Jul 2013 18:21:17 +0200
    Delivered-To: [email protected]
    Message-ID: <CAD77+gQboyX1nDcJAYr-5B9Yrnmb7Uus9Yb9TzAc=[email protected]>
    Subject: Test
    From: Richard Hartmann <[email protected]>
    To: [email protected]
    Content-Type: text/plain; charset=UTF-8
    
    -- 
    Richard
    
    

    by Richard &#x27;RichiH&#x27; Hartmann at July 31, 2013 09:36 PM

    vcsh v1.20130724

    vcsh has seen a lot of activity in recent times and today's release of vcsh 1.20130724 contains a lot of useful new features. In case you haven't tried vcsh yet, or not recently, now is a very good time to do so.

    vcsh has been trivial to use for at least a year now, but the documentation did a very poor job of exposing that fact. Which is why there's now a 30 second howto which will get you up to speed and committing in no time. The rest of the docs have undergone a major rewrite, as well.

    Feature-wise, there's

    • vcsh pull, which pulls from all vcsh remotes
    • vcsh push, which pushes to all vcsh remotes
    • vcsh status, which lists the status of all files checked into all vcsh repositories
    • clone hooks, including one run with the vcsh environment torn down. This allows you to operate on other repositories, e.g. clone another repository
    • $VCSH_GITIGNORE == none, which allows you to avoid writing gitignores automagically
    • comprehensive zsh completion, which will complete on pretty much anything you could wish for
    • several bug fixes, increased robustness, and more comments

    If vcsh pull, vcsh push, and vcsh status sound a bit like a poor man's myrepos (formerly known as mr), that's by design. On the one hand, this enables vcsh to handle practically all aspects of configuration management. One tool for everything.

    On the other hand, even if you use myrepos (like I still do and will continue to do), this gives you a quick and cheap way to operate on configuration repositories, only. Especially if you're in a hurry or on a cellular connection, you may not want to pull in all changes from all remotes. If the Linux kernel or other large projects are part of your normal myrepos setup, updating them all takes time and bandwidth. Contrary to that, configuration repositories tend to be lightweight. Updating those selectively simply makes sense.

    Debian unstable carries the current package, Homebrew should carry it soon. Arch AUR and Fedora are getting there. And as vcsh is written in POSIX shell, it does not need to be compiled but can be run directly from a git clone without the need for installation.

    Did I mention that this is the perfect time to try out vcsh? ;)

    by Richard &#x27;RichiH&#x27; Hartmann at July 31, 2013 09:11 PM

    July 26, 2013

    Geeknic blog

    Geeknic at Wissahickon Gorge Sunday 7/28/2013

    On Sunday, 7/28/2013 at 4pm, Geeknic will be gathering at the Valley Green Inn in the Wissahickon Gorge for an informal grilling and picnic get-together. There’s plenty of hiking throughout the gorge, and plenty of opportunities to go Geocaching. We’ll be bringing the essentials – hot dogs, burgers, chips and soda – but feel free to bring some more supplies or a small donation. We’re starting at 4pm and we’ll stick with it until we get tired of each other.

    You can find the Valley Green Inn here. There is parking nearby, and picnic spaces about 100-150 yards north up Forbidden Drive. Plenty of the Fosscon staff will be in attendance, if you have any questions or want to meet us beforehand!

    Feel free to ask questions in #geeknic or #fosscon on Freenode.

    by kyleyankan at July 26, 2013 07:09 PM

    July 22, 2013

    freenode staffblog

    Server hosting and trust

    For the purpose of disclosure we have had to make the difficult decision to discontinue a long-standing relationship with a server sponsor.

    As a freenode user you may be aware that our set-up is somewhat untraditional and differs from that of many other IRC networks; servers are sponsored by various companies and educational institutions across the globe and all our infrastructure is centrally managed by the freenode infrastructure team. Generally speaking we do not provide o:lines or other privileges to server sponsors. Whilst it is possible for a sponsor contact to also volunteer as a staffer on the network such recruitment is independent of any server hosting.

    Our staff are expected to work together closely and communication is key in any freenode relationship, be that with users, among staff or with sponsor contacts. It is important to us to be consistent in the way we provide support and apply policy and we expect all volunteers to be intimately familiar with our policies, procedures and philosophies — which in turn means that senior staff invest a lot of time in ensuring that any new recruits are given adequate support when getting to know the ins and outs of the network and what being a freenode volunteer entails.

    Unfortunately, one of our server sponsors added an o:line for themselves on the server they sponsored. Whilst we do not believe that this was done with any malicious intent (more through thoughtlessness/negligence, and having forgotten the expectations set out on our “Hosting a Server” page), we feel that we are unable to comfortably and confidently continue the relationship.

    Our number one priority has to be our target communities, the Free and Open Source Software communities that have chosen to make use of freenode in their internet activities.

    Whilst we do not believe and have no evidence to indicate that any user traffic or data has been compromised, we would of course encourage you to change your passwords if you feel that this would make you more comfortable in continuing to use our services.

    We can only apologise for this happening and we’d like to assure you that trust is incredibly important to us and that we are incredibly embarrassed that this situation arose in the first place.

    As a result of this we have just replaced our SSL certificates, so if you notice that these have changed then this is the reason why.

    We will of course take this opportunity to remind all our sponsors of our expectations when it comes to providing services to freenode and our target communities.

    Again, we apologise for any inconvenience and we hope that any loss of trust in the network that may have resulted from this incident can be restored and that your projects will continue to feel comfortable using the network in future.


    by christel at July 22, 2013 07:19 PM

    mrmist's blog

    Filter in the name of protection

    I think it’s shocking that one of the central pillars of the concept of the Internet, free access to all things, is casually eroded by David Cameron in the name of “protecting the children”. This is appalling. Whilst I’m sure that this will give some poor quality parents an illusion of online safety, saving them from what must surely be a terrible chore of actually having to care about what their children are doing for themselves, filtering traffic by default is a massive blow to online freedoms. This will not make things better. This paves the way for the government to more fully dictate how and what we should view on the Internet in the future – after all, if the technical filters are already in place, why not just increase them a nudge to filter out more content that the government deems “unsuitable”? And, of course, the elephant in the room is that those people who do not have these filters activated, who choose instead to maintain real access to the Internet, will have suspicion cast upon them.

    by Mrmist at July 22, 2013 07:27 AM

    July 17, 2013

    freenode staffblog

    Fosscon, an open source conference in Philadelphia PA, Saturday August 10th

    FOSSCON 2013 will be held on August 10th, 2013.  Several of our very own staff here at freenode will be attending this year and we are really looking forward to it.

    FOSSCON was spawned from the depths of freenode and this will be the 4th event so far.

    We are very excited about this year’s keynote speaker, Philadelphia’s own Jordan Miller, who leads a research team at The University of Pennsylvania. Jordan makes heavy use of open source software and is doing amazing work with 3D printing as it pertains to transplant organs.  http://www.upenn.edu/pennnews/news/penn-researchers-improve-living-tissues-3d-printed-vascular-networks-made-sugar.

    Listed below is just a quick peek at some of our confirmed speakers and their topics:

    • Bhavani Shankar will be speaking on how to bring in new developers to open source projects.
    • Elizabeth Krumbach Joseph will be speaking on Open Source Systems Administration.
    • Corey Quinn will be speaking on configuration management with Salt.
    • Brent Saner will be speaking on Project.Phree, a wireless mesh project.
    • Dru Lavigne will be speaking on FreeNAS 9.1.
    • Jérôme Jacovella-St-Louis will be hosting a workshop on cross-platform development with the Ecere SDK.
    • John Ashmead will be speaking on the math and science of invisibility.
    • John Stumpo will be offering a workshop on the Challenges facing FOSS game projects.
    • Walt Mankowski will be speaking on Scientific Programming with NumPy and SciPy.
    • Chris Nehren will be speaking on bridging the gap between development and operations.
    • Christina Simmons will be speaking on starting and managing open source events/projects.
    • Hector Castro will be offering a hands-on workshop on the Riak database engine.
    • Dan Langille will be hosting a workshop on Bacula: The Networked Backup Open Source Solution

    If you haven’t registered yet, please do so here: https://www.wepay.com/events/fosscon-2013!  We’ve had such an awesome response so far and are so excited to see how far we can go this year! Invite your friends, your partners, your business associates, and everyone else you know!  We’ll see you soon!

    by JonathanD at July 17, 2013 09:51 PM

    July 16, 2013

    erry's blog

    Move an image around with arrow keys (request)

    I often get asked how one can move an HTML element around in a page, because of a project of mine where I used that technique. I decided I would blog about it. It’s actually pretty simple!

    First, load your image in your page, like normal.
    I used this wonderfully drawn stickman figure.

    You need to give it a unique id, so that you can easily ‘find’ it with JavaScript code. It also needs to be absolutely positioned so that it can move freely inside your page. Something like this will do:

    <img src="stickman.png" id="stickman" style="position:absolute;top:0px;left:0px" />
    

    That’s all for the HTML code! Now you just need a bit of JavaScript:

    //bind an event when the user presses any key
    window.onkeydown = function (e) {
        if (!e) {
            e = window.event;
        }
        //The event object will either be passed
        //to the function, or available through
        //window.event in some browsers.
    
        var code = e.keyCode;
        //that's the code of the key that was pressed.
        //http://goo.gl/PsUij might be helpful for these.
    
        //find our stickman image
        var stickman = document.getElementById("stickman");
    
        //get the image's current top and left position.
        //stickman.style.top will find the top position out of our
        //style attribute; parseInt will turn it from for example '10px'
        //to '10'.
    
        var top = parseInt (stickman.style.top, 10);
        var left = parseInt (stickman.style.left, 10);
        
        //We'll now compare the code that we found above with
        //the code of the keys that we want. You can use a chart
        //like the one in http://goo.gl/PsUij to find the right codes,
        //or just press buttons and console.log it yourself.
    
        if ( code == 37 ) { //LEFT
    
        //time to actually move the image around. We will just modify
        //its style.top and style.left accordingly. If the user has pressed the
        //left button, we want our player to move closer to the beginning of the page,
        //so we'll reduce the 'left' value (which of course is the distance from '0' left)
        //by 10. You could use a different amount to make the image move less or more.
    
        //we're also doing some very basic boundary check to prevent
        //the image from getting out of the page.
    
            if ( left > 0 ) {
                stickman.style.left = left - 10 + 'px';
            }
        } else if ( code == 38 ) { //UP
            //if we pressed the up button, move the image up.
            if ( top > 0 ) {
                stickman.style.top = top - 10 + 'px';
            }
        } else if ( code == 39 ) { //RIGHT
            //move the image right. This time we're moving further away
            //from the screen, so we need to 'increase' the 'left' value.
            //the boundary check is also a little different, because we're
            //trying to figure out if the rightmost end of the image 
            //will have gone
            //further from our window width if we move it 10 pixels.
    
            if ( left+stickman.width+10 < window.innerWidth ) {
                stickman.style.left = left + 10 + 'px';
            }
        } else if ( code == 40 ) { //DOWN
            if ( top+stickman.height+10 < window.innerHeight ) {
                stickman.style.top = top + 10 +'px';
            }
        }
    }
    

    And yes, that is all! If you put that in an HTML file and run it, your little stickman will be running around happily!
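
    If you’d like to test the movement rules separately, the position update and boundary checks from the handler can be pulled out into a pure function. This is just a sketch of the same logic (same key codes, same 10px step, nothing new):

```javascript
// The handler's movement rules as a pure function: given the current
// position, the pressed key code, the image size and the window size,
// return the new position (unchanged when a boundary would be crossed).
function movePosition(pos, code, image, view) {
    var left = pos.left;
    var top = pos.top;
    if (code === 37 && left > 0) {                                     // LEFT
        left -= 10;
    } else if (code === 38 && top > 0) {                               // UP
        top -= 10;
    } else if (code === 39 && left + image.width + 10 < view.width) {  // RIGHT
        left += 10;
    } else if (code === 40 && top + image.height + 10 < view.height) { // DOWN
        top += 10;
    }
    return { left: left, top: top };
}

var moved = movePosition({ left: 0, top: 0 }, 39,
    { width: 50, height: 50 }, { width: 800, height: 600 });
// moved is { left: 10, top: 0 }
```

    The handler would then only need to read the current position, call movePosition, and write the result back into stickman.style — though the inline version above works just as well.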

    LIVE DEMO!

    by Errietta Kostala at July 16, 2013 08:15 PM

    July 15, 2013

    freenode staffblog

    New TLS/SSL Channel Modes & Webchat Features

    We’ve recently enabled some new functionality in our ircd to further help you manage your channels:

    Channel mode +S

    This ensures only users that have connected via TLS/SSL (and so have user mode +Z) are able to join; you can not even /invite other users through it. It will not prevent the use of the channel by any non-TLS/SSL users already present.

    Extended ban $z

    Documented in ‘/help extban’ for some time, this has also been enabled and matches all TLS/SSL users. Usage is similar to the ‘$a’ type (which matches all identified users) and it could, for example, be set as ‘+q $~z’ to quiet any users not connected over an SSL connection.

    Webchat

    WEBIRC has been enabled so that behind their hostmask, users can now be considered to be connecting from their real address. This means that a single ban format can apply to both direct connections and webchat connections.

    For example, a user connecting from 171.205.18.52 will still appear as ‘nickname!abcd1234@gateway/web/freenode/ip.171.205.18.52’ but ban masks of the form ‘*!*@171.205.18.52’ will match! This is now the most effective method of matching webchat users, but the realname and hexip username are still available.

    Although freenode’s webchat is available over SSL, the webchat’s localhost connection to the ircd is not SSL, so webchat users do not get user mode +Z. Webchat users will not be able to join a +S channel and will not match the $z extban, even if they are using webchat over SSL.

    Security considerations

    These channel modes can not guarantee secure communication in all cases; if you choose to rely on them, please understand what they can and can’t do, and what other security considerations there are.

    There are a variety of known security problems with SSL, and reasons why the +S mode may not guarantee transport security on freenode. Some of these are:

    • These modes may be unset by channel operators at any time, allowing non-TLS/SSL users to join, and the mode may subsequently be reapplied;
    • If network splits occur it may also be possible for users to bypass +S intentionally or by chance;
    • Clients may be compromised or malicious, or using a malicious shared host;
    • Clients may have traffic intercepted as part of a Man In The Middle (MITM) attack and then transparently forwarded via SSL, invisibly to channel users;
    • There may be issues with TLS/SSL itself in server or client configuration or architecture which compromise its ability to provide effective transport security at the network level (there have been several published attacks against SSL recently – see here).

    This is not an authoritative list, so before using +S as part of any channel which requires strong anonymity, please ensure you understand what it does and its drawbacks.

    There are other security tools you may want to look at – you may want to consider using client plugins that provide additional encryption or route your connection through Tor. Tor also allows you to create spurious traffic to hide real traffic patterns. freenode provides its own hidden Tor node which means you can trust this connection as much as you trust freenode. Your IRC traffic with freenode via Tor is end-to-end encrypted from your Tor client to our Tor node. It does not pass through any third party nodes in unencrypted form.

    Finally, unless you can trust everyone in a channel and are sure it is configured properly and you understand the other technical risks, do not rely on these channel modes exclusively. Security is generally layered; ensure you have good defense in depth and don’t rely on individual controls which may be a single point of failure.

    Using other websites or services via Tor

    Remember to always encrypt your traffic when using Tor as you have no control over who is running exit nodes and whether they are doing traffic analysis on them. While your traffic to the exit node is encrypted and the ingress node can not read it, the exit node will always need to be able to remove Tor encryption. If your traffic is clear-text, said exit node will be able to read it.

    by Pricey at July 15, 2013 06:34 PM

    June 30, 2013

    erry's blog

    Get rid of all the apps you’ve authorized on facebook

    So, you might have realised you authorized things you don’t even remember from years ago, and, worse, they’ve started emailing you, posting on your wall, or who knows what. You really want to delete ALL of them and start over, but there are 190 of them, and all facebook gives you is a little ‘x’ button that you have to click over and over, so there’s no way you’re going to go through all that.
    Unfortunately, the only way to get rid of every application you’ve authorized in one go is to ‘hack’ it. (Unless you delete your facebook account and re-create it, I guess.)

    Note: If you follow these instructions it will really delete all your facebook app data! Depending on the app, you might not be able to get your progress back when you re-authorize it! If you have something particularly precious that you really can’t afford to lose, I guess you have to remove the rest of them manually, or hack the JS code provided later to ignore its close button! I can’t take responsibility for what happens to you for automating facebook actions either (but hopefully nothing happens)

    There are 2 ways you can go around doing that. One way is to download a macro plugin for the browser of your choice, or for your operating system in general, and find out how to use it. Record a sequence of you going to the first ‘x’ button, clicking that, then clicking ‘revoke’, and configure your macro plug in to do that repeatedly, and hope that works.

    The other thing you can do is of course do some javascript magic! I always love that.

    Get a browser that has a decent developer console, like chrome. That’s also the only one I tested this on, so I recommend it. Now, find the ‘x’ button, right click it, and click inspect element.
    You should see this:

    <a class="_111 uiCloseButton uiCloseButtonSmall" href="#" role="button"
    aria-label="Remove Application"
    ajaxify="/ajax/settings/apps/delete_app.php?
    app_id=some_app_id" rel="async-post" title="Remove"></a>
    

    Take a note of the value of the ‘class’ attribute here. For me it was _111 uiCloseButton uiCloseButtonSmall and I’m not sure if it always stays the same, so it’s better to take a look for yourself and note down the class name shown in your browser. Then click on the ‘x’ on one of your applications, and look at the dialog that comes up. There’s a ‘remove’ button, so right click it and click ‘inspect element’. You should see this:

    <input type="button" name="ok" value="Remove">
    

    You can see that its name attribute is ‘ok’. That shouldn’t change for you, but take a look in case it does.

    Now that we have the class of the x button and the name of the input button, it’s time for action! You see, all these buttons have a common attribute, which is either name or classname. So, open up your developer console (it should already be open if you inspected an element earlier) and go to the actual ‘console’ tab.

    Step 1:
    Copy and paste this code (Note: If your machine isn’t very good, this code might make it lag)
    If needed, replace _111 uiCloseButton uiCloseButtonSmall with the class name you noted earlier!

    var elems =
    document.getElementsByClassName('_111 uiCloseButton uiCloseButtonSmall');
    for ( var i = 0 ; i < elems.length; i++) {
        elems[i].click()
    }
    

    Step 2:
    Sit back for a few seconds, and watch dozens of popup boxes show up. After they look like they’ve finished popping up, run this code:
    If needed, replace ok with the name you noted earlier!

    var elems = document.getElementsByName('ok');
    for ( var i = 0 ; i < elems.length; i++) {
        elems[i].click()
    }
    

    You should notice that the popups start going away. Now, because facebook rate limits actions, you may have to run that code a few times (just paste it in the console again) until they all go away. If all goes well, they’ll eventually all close.

    Again, because facebook rate limits actions, depending on how many apps you have, you’ll have to keep repeating steps 1 and 2 until all your apps are gone (it took me about 5 times with 180 apps). Still, it’s a lot faster than having to click 2 × number_of_authorized_apps buttons.
    After a few tries, all your apps will be gone. Congratulations! Now be really careful with what you authorize from now on, because having to do this again would be an annoyance!
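
    The ‘run it again until they’re all gone’ dance can itself be expressed as a little retry helper. This is a generic sketch (retryUntil is made up, and against the real page you’d still be subject to facebook’s rate limiting — the action would be the step 2 snippet above):

```javascript
// Generic retry helper: run an action until a check reports we're done,
// giving up after a maximum number of rounds.
function retryUntil(action, isDone, maxRounds) {
    for (var round = 0; round < maxRounds; round++) {
        if (isDone()) {
            return true;
        }
        action();
    }
    return isDone();
}

// Toy stand-in for the popups: each round "closes" a couple of them.
var popupsLeft = 5;
var finished = retryUntil(
    function () { popupsLeft -= 2; },
    function () { return popupsLeft <= 0; },
    10
);
// finished === true once popupsLeft has dropped to 0 or below
```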

    If you want to know what the javascript does, it really just finds all the elements whose class attribute (or name attribute in the latter case) is a specific value, and calls their click event. Hacky but seemed to work!

    by Errietta Kostala at June 30, 2013 11:15 AM

    June 26, 2013

    RichiH's blog

    Too much security

    So, regarding my cry for help...

    I did get several replies and did more research on my own. The TL;DR up to now is "I have a fully functioning device with no input method and my data may well die on it":

    • The device is passphrase-protected and encrypted so I can't simply connect a USB cable and use MTP.
    • I can't connect a mouse or keyboard as LG, in their endless wisdom, didn't design the USB port with enough power in mind so it can't support USB OTG on its own.
    • Google then removed USB OTG support from the Nexus 4's kernel. It's not as if powered USB hubs existed so this is obviously the correct path of action.
    • While I can install new programs via Google Play, Android 4.0 and above prevents newly installed programs from starting without user interaction.
    • LG points towards a third-party service for out-of-warranty repairs and as part of their Terms of Service, you have to forfeit all data as they "always update the software", i.e. they will prolly ship random other devices to you on a regular basis instead of what you sent in.
    • The Nexus 4 is running stock Android, locked bootloader and all

    The last two options I see are

    • Try to find a way to get a custom ROM onto the device with the help of USB cable and physical buttons only without destroying the encrypted data (yeah, right...)
    • Try and source a display so I can repair the device myself. But as not even ifixit.com offers a howto or parts... I suspect this may fail.

    And I cannot even be reached under my normal number, as I don't dare turn the device off and/or remove the SIM; that may somehow prevent me from recovering data while the device is still running.

    by Richard 'RichiH' Hartmann at June 26, 2013 10:08 PM

    June 16, 2013

    erry's blog

    Bookmarklet to remove HTML5 form requirements

    So, you’ve made a form, and you’re using the hip new HTML5 attributes on your elements, such as ‘pattern’, ‘required’, or “input type=’email’”, to get easy client-side form validation. It’s not secure, but it makes for a pretty nice UI. Except, what do you do after you’ve done that and need to test that your server-side validations work? Your browser, if it’s any good, won’t let you submit an invalid form anymore. You have to go into developer tools and remove these attributes, which is annoying.
    So, I made a few lines of JavaScript that remove every ‘pattern’ and ‘required’ attribute they find in your HTML, and change every type=’email’, type=’url’ and type=’number’ to type=’text’.

    var elems = document.getElementsByTagName("*");
    
    for (var i = 0; i < elems.length; i++) {
        var elem = elems[i];
        if (elem.getAttribute('required') !== null) {
            elem.removeAttribute('required');
        }
        if (elem.getAttribute('pattern') !== null) {
            elem.removeAttribute('pattern');
        }
        if (elem.type === 'email' || elem.type === 'number' || elem.type === 'url') {
            elem.type = 'text';
        }
    }
    
    void(0);
    

    If you compress it, you end up with a nice browser bookmarklet:

    javascript:var elems=document.getElementsByTagName("*");for(var i=0;i<elems.length;i++){var elem=elems[i];if(elem.getAttribute('required')!==null){elem.removeAttribute('required')}if(elem.getAttribute('pattern')!==null){elem.removeAttribute('pattern')}if(elem.type==='email'||elem.type==='number'||elem.type==='url'){elem.type='text'}}void(0);

    Better copyable version of the bookmarklet – sorry, WordPress ruins it

    Simply copy and paste that (preferably from the ‘better copyable version’, so that there’s nothing in the HTML here that can break it) and add a bookmark with it as the URL. Then you can just click on it when you need it!
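    As an aside (not from the post): if all you need is for the browser to stop blocking submission, HTML5 forms also have a noValidate DOM property (the counterpart of the novalidate attribute) that disables constraint validation for a whole form, without touching individual inputs. A minimal sketch, written so the loop can be tested outside a browser — in a page you would call it as disableValidation(document.forms):

    ```javascript
    // Alternative sketch: disable constraint validation per form instead
    // of stripping attributes from every input. Takes an array-like of
    // form-like objects and sets noValidate on each.
    function disableValidation(forms) {
        for (var i = 0; i < forms.length; i++) {
            forms[i].noValidate = true; // same effect as the novalidate attribute
        }
        return forms.length;
    }

    // As a bookmarklet:
    // javascript:(function(){for(var i=0;i<document.forms.length;i++){document.forms[i].noValidate=true}})();
    ```

    Note this only affects submission-time validation; unlike the attribute-stripping bookmarklet, it won't change how type=’email’ or type=’number’ inputs behave while typing.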

    by Errietta Kostala at June 16, 2013 10:07 AM