Planet freenode

October 11, 2019

Pricey's blog

Azure DevOps permissions primer

I often join Azure DevOps projects some time after they were started and can almost guarantee I'll find... questionable... permissions have been applied.

If you search for e.g. "azure devops permissions" you'll get pages like this, which only tell half the story, so here's a quick primer...

October 11, 2019 12:00 AM

September 15, 2019

erry's blog

Making Jenkins Behave 2: Electric Boogaloo

Jenkins being all formal

That’s right, as promised, I’m going to torture myself with Jenkins some more, this time with multi-branch pipelines!

If you missed it, I recently wrote a blog post in which I explained how to integrate Jenkins and Github with freestyle jobs. In that post, I stipulated that were I able to use multi-branch pipelines, my life would have been much easier. Well, it’s true. Sort of. Multi-branch pipelines, once you get them working, are much, much better than freestyle jobs. As you may have guessed, the problem is the initial setup, because Jenkins has incredibly cryptic error messages. Thanks, pals.

Anyway, I’ve figured it out so you don’t have to, so let’s get started!

The first thing you want to do is set up your github user account and repository, before you even touch Jenkins. Unlike last time, where the order of things didn’t matter, this time it’s extremely important if you want everything to go smoothly.

Navigate to your personal access tokens settings on Github. You want to create a token with the following permissions:
admin:org_hook, admin:repo_hook, repo, user:email
They’re a bit more powerful than last time, but that does mean that Jenkins can set up the repo hooks for you so you don’t have to.

Now go to your repository and create a Jenkinsfile. That’s right, we’re creating the CI/CD pipeline before we’ve even touched Jenkins at all. This is because if Jenkins doesn’t find a Jenkinsfile, it pretends that your credentials are wrong and sends you on a wild goose chase, even if they’re absolutely fine. So just pick one of the hello world examples – it really doesn’t matter what, and all they do is display a version string, but you really want a valid Jenkinsfile.
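
For reference, a minimal declarative Jenkinsfile along those lines might look like the sketch below (the stage name and echo text are arbitrary placeholders):

pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                // any trivial step will do; the point is that a valid Jenkinsfile exists
                echo 'Hello World'
            }
        }
    }
}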

Now go to Jenkins and click create new job or new item, and select multi-branch pipeline. Incidentally, if this is the first time you’ve run Jenkins, you might get an infinite loading screen. If that happens, just turn it off and back on again.

  • Click the “add source” dropdown and select Github.
  • Within credentials, click ‘add’ and select ‘Jenkins’.
  • Keep Kind as ‘username with password’.
  • As username, enter your github username.
  • As password, enter the access token from earlier. If you’ve lost it, like I have, you can just regenerate it, provided it’s not being used anywhere else.
  • Under ‘Repository HTTPS URL’, enter the URL of your repository, which can be a private repo.

Now if you click apply and save, your github repo should have a new webhook!

Edit: If it’s not been added automatically, go to Jenkins config -> global config -> github and see the “by default” section, which will give you the URL you can manually create a webhook for on Github. It needs the “pull request” and “push” hooks.

If it’s worked so far, you may be tempted to click on “Scan repository now”. Bad idea. It won’t work, and it will confuse you.

What you ACTUALLY have to do is commit a change to master (or whatever branch has the Jenkinsfile). If you do that, and wait a minute, it should automatically build!

Jenkins showing the branches

The commit on Github will also be updated to show success or failure.

Github commit updates

You’ll also see PR status!

PR status is shown

As you can see, Multi-branch pipelines are already much easier to work with! I just wish Jenkins weren’t so cryptic – I wasted way too much time thinking it wasn’t working because I either hadn’t given it enough permissions, or I thought scanning was supposed to work.

Hope you have found this informative! Maybe next time I’ll dive deeper into multi-branch pipelines and build something cool!

by Errietta Kostala at September 15, 2019 02:19 PM

September 01, 2019

Pricey's blog

Switching backups to Restic

I have used Duplicati uneasily for some time to back up my personal server which hosts Nextcloud and other bits.

September 01, 2019 12:00 AM

August 22, 2019

erry's blog

Making Jenkins and Github ACTUALLY integrate with each other

An alternative version of the jenkins logo where the mascot is on fire

Introduction

You may need to build jenkins jobs when branches/PRs are made from within the repository – say, to run tests. You may also want to report on the test status when finished. And you may have found doing this quite frustrating. If these things are true, join me on a journey…

For a work project, I needed to integrate Jenkins tests with Github repositories. Basically, we make pull requests from branches within the repository against master, and then merge them when review and testing passes. For running automated tests, we use Jenkins, and so we want to be able to trigger the jenkins job when a PR is created and also change the PR check status to passed or failed depending on the results.

Assumptions

There are many different ways of doing these things, so it’s quite important that I list my assumptions early on so that you don’t waste your time reading a blog post that doesn’t actually help you. These assumptions held true for the case that I wanted to solve, and your circumstances may well be different.

  • You don’t use Jenkins multibranch pipelines – if you do, certain things will be easier, but there still may be information here that helps you.
  • You want to build PRs from branches within the same repository (as is the case for most private/corporate projects) rather than PRs from forks (as is the case for most FOSS projects).

Requirements

  • 2GB of RAM or more; Jenkins is very memory-hungry…
  • A Jenkins installation – if you’re like me and don’t want to experiment on your company’s actual Jenkins install, it may be smart to invest in a Linux box such as a DigitalOcean, AWS, Linode, or other cloud provider virtual machine.
    Since it needs to be able to receive Github webhooks, it has to be accessible from the Internet, or from wherever you’re hosting your Github Enterprise instance. Thus, you unfortunately most likely can’t get away with just playing with this on a local VM or docker container – sorry! But some of the cloud providers mentioned above have hourly pricing, so it shouldn’t cost you too much.
  • A Github project that you want to integrate with, duh!
  • A way to relax (trust me, you’ll need it…)
  • A lot of patience

Setup

Github Access

The first step is setting up Github settings in a way that allow Jenkins to update PR status. Technically, it doesn’t matter if you do this step or the next step first, but it’s part of laying down the groundwork for later. Unfortunately, as much as we try to group things that need to be done in Github and Jenkins separately, we’ll need to go between the two platforms quite a lot, so get ready for that.

Having said that, I’ll try to group together as many of the things done on a single platform as physically possible, so if something doesn’t make sense right away, don’t worry, it’ll all come together eventually. I hope.

Make a user with write access

First, make sure you have a user with write access to your repository that you can use for this. Don’t worry, we won’t end up allowing Jenkins to have full write access, but due to how Github works this is required. You can use an already existing user or make a new one, and you can lock it down as much as you feel you need to in order to make sure it’s safe.

Create a personal access token

Personal access tokens allow access to the Github API by using a token instead of your username and password. In addition to this, they can be locked down to only have the permissions that are absolutely necessary. As such, this is one of the safest ways to integrate with any service.

  • Go to Github Settings (Accessible by clicking on your avatar on the top right)
  • Click on developer settings near the bottom.
  • Click on personal access tokens.
  • Click generate new token.
  • For this, the only required scope is “repo:status”. This means that the token that you are generating can be used to update commit status, but can’t be used for anything else.
  • Save the token somewhere; you won’t be able to get it again. We’ll need it later on in the post to set up Jenkins integration.

Jenkins plugins & setup

Now that that’s done, we need to do the groundwork to allow Github to trigger Jenkins builds, so let’s head over to our jenkins install for this next step.

For this task, you just need the Github plugin for Jenkins. This provides a webhook URL, which is <YOUR_JENKINS_INSTANCE_URL>/github-webhook. You can visit that URL manually to see if it works; if you see “Method POST required” then it’s already set up.
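
If you’d rather check from a terminal, a quick probe like this should do (the URL is a placeholder, and the exact error text may vary between Jenkins versions):

curl -i https://YOUR_JENKINS_INSTANCE_URL/github-webhook
# an error response complaining that method POST is required means the endpoint is live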

Integration

Now that you have Github access set up and the Jenkins plugin installed, it’s go time!

Create Jenkins job

If you already have the job you want to use, you can edit it so that it matches these settings instead.

  • Go to your Jenkins install
  • Click ‘create item’
  • Click ‘freestyle project’ – once again we’re assuming you’re not using pipelines. If you are, then your life is much easier.
  • Tick github project and add your project URL
  • Add the repository in source code management – this requires either the credentials of a user with at least read-only access, or for your repository to be public.
  • Leave branches to build blank so you can build PR branches.
  • Select at least the ‘GitHub hook trigger for GITScm polling’
  • Add a build step that runs your tests; I’ll leave this one to you.
  • Click ‘save’ for now; we will need to make more edits later, but it’s good to get the base case working first.

Trigger Jenkins builds from Github

Remember the github-webhook URL from earlier? It’s that URL’s time to shine: time to add it as a webhook in our github repo, so that it triggers builds.

  • Go to settings within your github repo
  • Click on Webhooks
  • Put the payload URL as the URL from earlier, i.e. <YOUR_JENKINS_INSTANCE_URL>/github-webhook.
  • Select “send me everything”. I’m not sure it’s required, but it really doesn’t matter.
  • Save the webhook.

Now, make a PR with at least one commit. If all goes well, you should see the job build in jenkins:

If it doesn’t go well, click ‘Github Hook Log‘ and hope that you can diagnose the problem. If not, you may need your patience and ways of relaxing from the Requirements section, plus a heavy dose of StackOverflow.

At this stage, github won’t report the job status. This is normal! We’ll fix this in the next step!

Reporting the job status

I’d advise taking a break at this point, because this is by far the most frustrating step.

Welcome back! Hopefully you’ve had a long meditation session/drink/whatever keeps you sane when doing really annoying things.

Now it’s time to get to the final and most frustrating part of the process – actually updating GH build status. Are you ready?

Creating a jenkins secret to store the token

Did you save the token earlier? No? Don’t worry, you can just make another. Now that you have a token, time to store it securely* in Jenkins.

* They’re encrypted but can be decrypted if someone has access to the jenkins instance. Hopefully, you’ve secured it, right?

  • Click credentials from the home page
  • Click on the credential store where you want to store it. If you have no idea what that means (I sure don’t), click on the ‘Jenkins’ store.
  • Click on the domain, such as “global credentials”
  • Click on “add credentials”
  • Kind is “secret text”
  • The secret is your token from earlier.
  • Select a meaningful id and description

Add the secret to your job

  • Go back to your jenkins job and click on ‘configure’, which is conveniently placed right next to the ‘delete job’ button, which you don’t want to accidentally click.
  • Within “Build environment”, check Use secret text(s) or file(s)
  • In bindings, select ‘secret text’. This should expand a new dialog.
  • In variable, type in the name of the environment variable you want to use, such as GH_TOKEN
  • Select “Specific Credentials” and select the id of the secret you made in the previous step

Report status to github

Now, it’s time for the true task! Report the status back to Github. While still on the same configure page, it’s time to tweak the build job slightly.

Put the following before running your test command, to set the status as pending:

export REPO_NAME='YOUR_REPO_NAME'
export JOB_NAME='YOUR_JOB_NAME'

# report a pending status while the tests run
curl "https://api.github.com/repos/$REPO_NAME/statuses/$GIT_COMMIT?access_token=$GH_TOKEN" \
-H "Content-Type: application/json" \
-X POST \
-d "{
    \"state\": \"pending\",
    \"context\": \"jenkins/$REPO_NAME\",
    \"description\": \"Jenkins\",
    \"target_url\": \"https://YOUR_JENKINS_URL/job/$JOB_NAME/$BUILD_NUMBER/console\"
}"

This will set your git commit to pending!

Now, because we’re not using pipelines, and because you don’t want the job to stop if it fails, you need to change your test command. So if your command is, say, bash test.sh, you need to change it thusly, so that you capture whether it succeeds or not. This makes the assumption that your command will return a non-zero status if it fails, which is true for most test frameworks.

First, export TEST_ERROR=0; we’ll use this to store the error, if any.

Change your command from bash test.sh to bash test.sh || TEST_ERROR=$?
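
Put together, that part of the build step looks roughly like this (a sketch, assuming test.sh is whatever runs your tests):

# don't let a test failure abort the build step here; record it instead
TEST_ERROR=0
bash test.sh || TEST_ERROR=$?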

Now, when you want to report the status, you can check if we caught an error or not:

if [ $TEST_ERROR -eq 0 ] ; then
    curl "https://api.github.com/repos/$REPO_NAME/statuses/$GIT_COMMIT?access_token=$GH_TOKEN" \
    -H "Content-Type: application/json" \
    -X POST \
    -d "{
        \"state\": \"success\",
        \"context\": \"jenkins/$REPO_NAME\",
        \"description\": \"Jenkins\",
        \"target_url\": \"https://YOUR_JENKINS_URL/job/$JOB_NAME/$BUILD_NUMBER/console\"
    }"
else
    curl "https://api.github.com/repos/$REPO_NAME/statuses/$GIT_COMMIT?access_token=$GH_TOKEN" \
    -H "Content-Type: application/json" \
    -X POST \
    -d "{
        \"state\": \"failure\",
        \"context\": \"jenkins/$REPO_NAME\",
        \"description\": \"Jenkins\",
        \"target_url\": \"https://YOUR_JENKINS_URL/job/$JOB_NAME/$BUILD_NUMBER/console\"
    }"

    exit $TEST_ERROR
fi

This will run a hook to update to success or failure, depending on the value of $TEST_ERROR. It’ll also exit with a non-zero status if there was a failure before, telling jenkins to record the failure as well.

That’s it!

If you’ve done everything right, you should see the status reported to GH, either success or failure:

Congratulations! After much pain, you have a working integration.

Further steps

  • You probably want to look into jenkins multibranch pipelines so you can at the very least programmatically act on build status without having to write a hacky shell script.

Thanks

Thanks to the blog posts “adding a github webhook in your jenkins pipeline” and “how to update jenkins build status in github”, which were extremely helpful to me. Even though I couldn’t directly replicate what they did, my solution is really a Frankenstein’s monster-style stitch up of both of them, so yay for them.

Thanks to the wonderful folks at the jenkins artwork page for the image used as a featured image for this post. As much as I complain about jenkins, it’s probably one of the most powerful CI tools I’ve ever used.

And thank you for reading! Let me know if you want me to make myself suffer further by making a write-up about multi branch pipelines! I’m sure watching me attempt to write Groovy will be amusing to someone…

And apologies if this post seems more passive-aggressive than my usual style, but in my defense, this was a real adventure…

by Errietta Kostala at August 22, 2019 12:37 PM

August 19, 2019

Pricey's blog

First thoughts on Zola

Zola is a static site generator in Rust.

Wanting to blog a bit more and having a passing interest in Rust, I figured I might as well rebuild my blog rather than actually write anything...

August 19, 2019 12:00 AM

August 12, 2019

Pricey's blog

Invoke-ASCmd Caches xmla?

tldr: Invoke-ASCmd caches xmla files somewhere. Always provide the absolute path to Invoke-ASCmd -InputFile.

August 12, 2019 12:00 AM

August 11, 2019

freenode staffblog

Matrix GDPR access request data overshare

Hi all,

You may already be aware that in the process of servicing a request for personal information under the GDPR, Matrix.org provided a user with a data dump that mistakenly included events that user had not been a party to. We suggest reading Matrix.org's writeup for more details.

On the morning of 2019-08-04 UTC we were notified by the recipient of the dump that the errant data included messages from freenode users and, in a spirit of transparency, felt it was important to keep you informed of any potential security issue concerning you.

We have reached out to Matrix.org's team in order to understand the impact of the issue, and they have assured us that all of these messages were to public channels whose administrators chose to make their histories publicly available.

If you have any questions, feel free to either track down a staffer in PM or email [email protected].

Thanks for using freenode.

by edk at August 11, 2019 06:52 PM

June 22, 2019

freenode staffblog

Moving webchat to Kiwi IRC

Hi all,

after years of providing our good old qwebirc based webchat, we are excited to announce that freenode is moving to a new Kiwi IRC based solution!

The change will occur during this weekend (June 22nd / 23rd).

Kiwi IRC is an extensible and modern webchat solution, making IRC a lot easier and more comfortable to use for both newcomers and long time users. In addition to a clean and friendly UI it supports translations into various languages, easier formatting and usage of emoji and advanced customization for power users.

Most existing links and bookmarks should continue to work, including sites embedding the freenode webchat; please do let us know if you are running into issues.

We would like to thank everybody who supported us during this migration, most of all Kiwi's developer, prawnsalad, who provided a huge amount of code, adaptations, options and testing that should ensure a smooth migration.

Along with this change, we will no longer apply gateway cloaks to users of our webchat, treating them the same as any other client. While channel operators will still be able to recognize them via the realname field, we strongly suggest that you carefully consider the impact on legitimate users and hope that you decide not to ban webchat users as a whole.

Please note that the old webchat will no longer be available after this migration.

Thank you for using freenode, via our new Kiwi webchat or any other client you prefer!

by Fuchs, ilbelkyr at June 22, 2019 08:42 PM

June 17, 2019

freenode staffblog

ircd-seven 1.1.8

Hi all,

We're preparing to release version 1.1.8 of ircd-seven and deploy it to the production network over the coming weeks.

This release incorporates a number of user-facing changes:

  • Monitor is restored to a usable state, and will be re-enabled.
  • Spam filtering can be opted out of. Setting mode +u on yourself (/umode +u or /mode yournick +u) will disable filtering for messages sent to you. Setting it on a channel will disable filtering for all messages sent to that channel.
  • /motd and /stats are no longer ratelimited unless directed at a specific server.

We're also introducing support for several IRCv3 features that may improve the experience on capable clients.

There's one more change that is not related to this release, but deserves mention: nearly two years ago, we developed an improvement to the +z channel mode, which sends messages that would have been blocked by +q or +m to channel operators instead. Our new version sends these messages to ops from the @-prefixed version of the channel:

:[email protected]/staff/spy.edk PRIVMSG @#test :I'm quieted

to make it easier for operators to distinguish between messages everyone can see and messages they can see due to +z.

This borrows the syntax from an existing feature, STATUSMSG, but is easy to tell apart from it, because only ops and voiced users can send to @channel normally.

We gated this behind a feature switch, and we've been waiting, largely passively, for client support to increase. It appears that everyone who wants to act on warnings has done so, and we'd like to commit to a date to enable it.

We'll be enabling this feature on the 31st of July 2019, UTC. If you op a channel that uses +z, please make sure your client handles it correctly. You can send test messages with /msg @#channel test from a second opped connection for any channel where you have ops—your client should associate this message with #channel, and preferably distinguish it from normal messages in some way.

Thanks for using freenode, and I look forward to collaborating with many of you via a slightly less-antiquated medium.

by edk at June 17, 2019 10:15 AM

May 30, 2019

erry's blog

ENOUGH with the burndown charts!

Burndown chart [Source: Wikipedia]

I heard about a team being asked to provide burndown charts in their demos to stakeholders. My first reaction was: why!? In this blog post, I’m going to try to articulate why I believe burndown charts are often meaningless at best and harmful at worst, and why even if they work well for an engineering team it doesn’t make sense to show them to anybody outside the team.

But first: The counter argument

But, Errietta, burndown charts are useful! They help us know how much we can put in each sprint! If we don’t look at our velocity, how do we know if our sprint goals are achievable?

Sure, in theory. Except, I’ve never seen them used that way in my entire career. Plus, multiple times, the conversation has gone somewhat like this:

PM: We didn’t meet our velocity of 25 last sprint

Engineer: That’s because we put too much in the sprint, and stuff was still only as far as QA when we finished. Maybe we should try putting in fewer points?

P: I’m going to put in 30 points, I’m sure if we all pull together we can achieve it

E: ….

Now if your team actually uses them only between yourselves, and doesn’t show them to PMs and stakeholders, they could actually be useful. You could use them to tell the PM that what they want this week is way too much ;)

Having said that, if you are using sprint points and burndown charts in your team, and it works for you and you’re happy, hey, go for it, and maybe write a comment saying how you manage that. This is just an opinion piece after all.

Why sharing burndown charts is meaningless

Going back to the situation described at the beginning of the post, your stakeholders want to know what you have developed in a sprint and give feedback on it. If it’s done correctly, your stakeholders should be your users, or at the very least people who represent your users (such as sales, customer services, and so on). It’s perfectly reasonable to give them a list of what you planned to achieve and which of those things you actually achieved, but what’s the point of showing them the burndown chart?

Scenario 1

Imagine a situation where you had 30 points in a sprint and achieved 25, and on top of that in good time, so the burndown chart moved quite quickly. You have a pretty good looking graph, and you show that to stakeholders. Congratulations, you did lots of work!

However, those points represent complexity, not importance or impact. So what if the 5 points you didn’t manage to achieve were high impact things and the 25 points weren’t as high impact? Well, now you have a meaningless chart nobody cares about, and a high importance feature that wasn’t done.

Scenario 2

Having noticed that counting points is meaningless, you switch to charts that don’t show a number of points and only show a percentage, or maybe you decide to point everything as a ‘1’.

Congratulations, now your chart is even more useless to your stakeholders than before. Literally all you are showing is that you moved quickly and turned around a predefined number of tasks in two weeks. There’s still no measure of impact, and now there’s not even a measure of complexity. The chart is as useful as showing the number of days you spent in a sprint. Hey everyone, let’s have a chart starting from 10 and going to 0. Burndown chart done.

Don’t get me wrong. Again, this chart could be extremely helpful used between engineers in your team, but in my opinion it’s worthless to stakeholders.

Why burndown charts are harmful

I could write a whole book on this, but I think I’ll settle for writing a bullet-pointed list of some of the worst things I’ve seen happen:

  • As I said before, management sometimes use them to tell people to do more work in the same amount of time without fixing any of the things that slow people down. Yes, even in good companies.
  • Teams sometimes brag about the number of points they achieved and compete with other teams. Yes, really. This is completely meaningless because one team’s 5 is different from another team’s 5. “Why is X startup doing 100 points a sprint and we only do 70!?”. As good a question as “why does Jane have apples and Chloe have oranges?”

    Even if sprint points represented exactly the same thing for everyone, some teams have fewer people than others and some teams have more complicated tasks than others. It’s probably easier to develop and test 8 1-point tickets than it is to develop and test a single 8-point ticket.
  • They cultivate burnout culture, which is why I want to call them burnout charts. In general, agile sprints appear to be the same as running one athletic sprint after the other with no rest. Software development is a marathon, not a series of back-to-back sprints. There needs to be time for relaxing, unwinding, and learning.
  • People often don’t look at why a burndown chart is ‘bad’ anyway. What if it’s bad because you don’t have time to write automated tests (because you have to do 50 story points in a sprint) and now QAs have to manually test everything? What if it’s bad because the release process is long-winded and complicated?

    If your organisation is actually addressing those problems, good – it means that you can use a ‘bad’ burndown chart as evidence that something needs to be done. But if they continue having the same practices, the whole thing is meaningless.

What to do instead?

Let engineers use burndown charts if they like them, but don’t force them to share them with product and stakeholders. If they do share them, and they tell you that there is a bottleneck somewhere, listen to them and address their concerns.

Allow some breathing room in sprints for learning and experimentation – having a feature half a day later won’t matter that much when your engineers use their free time to learn something that helps enhance user experience and therefore get you closer to your KPIs.

Remember the timescales you are dealing with are extremely minuscule in the long run. I’ve heard (on more than one occasion!) management say that 3 weeks for something is too long, but 2 weeks would be ok. Software is turning over faster than ever before! If there’s a tight deadline that the business’s livelihood depends on, sure, the extra week could be disastrous.

But when we’re talking about day-to-day work, and estimates at that, the difference adds up to basically nothing in the long run. This is extremely frustrating when it comes from a successful, profitable, mid-sized or large organisation; you’re not going to go out of business if a piece of work costs an engineer’s salary for two more weeks.

In conclusion

I’m not against agile. I’m not against sprint points. I’m not even against burndown charts! However, I see many examples from people in the industry saying that those things are used as a way to rush out broken code, or even as a stick to beat people with. Thankfully, that hasn’t been my personal experience, but I believe it when people say it happens.

I think that software engineers are, in general, very smart and hard working, and don’t need so many eyes on how fast they are performing. It’s completely the wrong metric. We should be measuring what impact our work has, even if that only means looking at how much profit has increased by.

Once again, if your team is genuinely happy with those practices, if they help them, great. Otherwise, you may want to stop and think about what you’re actually hoping to achieve by obsessing over those numbers so much, and if it’s even worth the effort spent on doing so.

by Errietta Kostala at May 30, 2019 08:52 PM

May 20, 2019

erry's blog

Introduction to the fastapi python framework

I have been working on a new python-based API recently, and on a colleague’s suggestion we decided to use fastapi as our framework.

Fastapi is a python-based framework which encourages documentation using Pydantic and OpenAPI (formerly Swagger), fast development and deployment with Docker, and easy tests thanks to the Starlette framework, which it is based on.

It provides many goodies such as automatic OpenAPI validation and documentation without adding loads of unneeded bloat. In my opinion, it’s a good balance between not providing any built-in features and providing too many.

Getting started

Install fastapi, and an ASGI server such as uvicorn:

* Make sure you’re using python 3.6.7+; if pip and python give you a version of python 2 you may have to use pip3 and python3. Alternatively check out my post on getting started with python.

pip install fastapi uvicorn

And add the good old “hello world” in the main.py file:

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def home():
    return {"Hello": "World"}

Running for development

Then to run for development, you can run uvicorn main:app --reload

That’s all you have to do for a simple server! You can now check http://localhost:8000/ to see the “homepage”. Also, as you can see, the JSON responses “just work”! You also get Swagger UI at http://localhost:8000/docs ‘for free’.

Validation

As mentioned, it’s easy to validate data (and to generate the Swagger documentation for the accepted data formats). Simply add the Query import from fastapi, then use it to force validation:

from fastapi import FastAPI, Query

app = FastAPI()


@app.get('/user')
async def user(
    *,
    user_id: int = Query(..., title="The ID of the user to get", gt=0)
):
    return {'user_id': user_id}

The first parameter to Query is the default value, used if the caller does not provide one. If it is set to None, the parameter is optional. To have no default at all and make the parameter mandatory, you pass Ellipsis, written ..., as in the example above.
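
To make the difference concrete, here’s a minimal sketch (the /search route and both parameters are invented for illustration):

from fastapi import FastAPI, Query

app = FastAPI()


@app.get('/search')
async def search(
    q: str = Query(None, max_length=50),  # optional: defaults to None
    page: int = Query(..., ge=1)  # mandatory: Ellipsis means no default
):
    return {'q': q, 'page': page}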

If you run this code, you’ll automatically see the update on swagger UI:

Swagger UI allows you to see the new /user route and request it with a specific user id

If you type in any user id, you’ll see that it automatically executes the request for you, for example http://localhost:8000/user?user_id=1. In the page, you can just see the user id echoed back!

If you want to use path parameters instead (so that it’s /user/1), all you have to do is import and use Path instead of Query. You can also combine the two.
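
For example, something along these lines should work (a sketch; the verbose flag is made up to show the combination):

from fastapi import FastAPI, Path, Query

app = FastAPI()


@app.get('/user/{user_id}')
async def user(
    user_id: int = Path(..., title="The ID of the user to get", gt=0),
    verbose: bool = Query(False)  # still a query parameter: /user/1?verbose=true
):
    return {'user_id': user_id, 'verbose': verbose}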

Post routes

If you had a POST route, you just define the inputs like so:

from fastapi import Body, FastAPI, Query

app = FastAPI()


@app.post('/user/update')
async def update_user(
    *,
    user_id: int = Body(...),  # a singular value read from the POST body
    really_update: int = Query(...)
):
    pass

You can see that in this case user_id is declared with Body(...); that means it’ll be read from the POST request body, while really_update stays a query parameter. (A bare int with no Query, Path, or Body would be treated as a query parameter.) If you’re accepting more complex data structures, such as JSON data, you should look into request models.

Request and Response Models

You can document and declare the request and response models down to the detail with Pydantic models. This not only allows you to have automatic OpenAPI documentation for all your models, but also validates both the request and response models to ensure that any POST data that comes in is correct, and also that the data returned conforms to the model.

Simply declare your model like so:

from pydantic import BaseModel


class User(BaseModel):
    id: int
    name: str
    email: str

Then, if you want to have a user model as input, you can do this:

@app.post('/user/update')
async def update_user(*, user: User):
    pass

Or if you want to use it as output:

@app.get('/user', response_model=User)
async def user(
    *,
    user_id: int = Query(..., title="The ID of the user to get", gt=0)
):
    my_user = get_user(user_id)
    return my_user

Routing and breaking up bigger APIs

You can use APIRouter to break apart your api into routes. For example, I’ve got this in my API app/routers/v1/__init__.py

from fastapi import APIRouter
from .user import router as user_router


router = APIRouter()

router.include_router(
    user_router,
    prefix='/user',
    tags=['users'],
)

Then you can use the users code from above in app/routers/v1/user.py – just import APIRouter and use @router.get('/') instead of @app.get('/user'). It’ll automatically route to /user/ because the route is relative to the prefix.

from fastapi import APIRouter, Query

# the User model and get_user helper come from wherever you keep them

router = APIRouter()


@router.get('/', response_model=User)
async def user(
    *,
    user_id: int = Query(..., title="The ID of the user to get", gt=0)
):
    my_user = get_user(user_id)
    return my_user

Finally, to use all your v1 routers in your app just edit main.py to this:

from fastapi import FastAPI
from app.routers import v1


app = FastAPI()

app.include_router(
    v1.router,
    prefix="/api/v1"
)

You can chain routers as much as you want in this way, allowing you to break up big applications and have versioned APIs.
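
For instance, a hypothetical second version could sit alongside the first (assuming an app/routers/v2 package laid out like v1 above):

from fastapi import FastAPI
from app.routers import v1, v2

app = FastAPI()

app.include_router(v1.router, prefix="/api/v1")
app.include_router(v2.router, prefix="/api/v2")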

Dockerizing and Deploying

One of the things the author of fastapi has made surprisingly easy is Dockerizing! A default Dockerfile is 2 lines!

FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

COPY ./app /app

Want to dockerize for development with auto reload? This is the secret recipe I used in a compose file:

version: "3"
services:
  test-api:
    build: ..
    entrypoint: '/start-reload.sh'
    ports:
        - 8080:80
    volumes:
        - ./:/app

This will mount the current directory as app and will automatically reload on any changes. You might also want to use app/app instead for bigger apps.

Helpful links

All of this information came from the fastapi website, which has great documentation that I encourage you to read. Additionally, the author is very active and helpful on Gitter!

Conclusion

That’s it for now – I hope this guide has been helpful and that you enjoy using fastapi as much as I do.

by Errietta Kostala at May 20, 2019 04:21 PM

May 04, 2019

freenode staffblog

freenode Next Gen Tor Hidden Service

Over the last few years, the Tor Project has developed a new Tor Hidden Services protocol. It has a few improvements over the previous version, including better cryptography using SHA3 and ed25519.

We've added a new Tor Hidden Service address to our instructions for connecting to freenode via Tor that uses the new protocol. The new address is

ajnvpgl6prmkb7yktvue6im5wiedlz2w32uhcwaamdiecdrfpwwgnlqd.onion

If you're using a recent version of Tor (0.3.5 or newer) to connect to freenode, you should be able to use the new service by changing from the old address to the new one in your client configuration. The old address will continue to work for the foreseeable future, but is likely to be deprecated eventually as the Tor ecosystem changes.

by dax at May 04, 2019 12:00 AM

January 26, 2019

erry's blog

Build APIs with node, Lambda & Serverless

This is a talk I did at London Node User Group on January 23rd, 2019.

You can watch the talk below or on youtube.

by Errietta Kostala at January 26, 2019 05:28 PM

January 19, 2019

erry's blog

Porting errietta.me to nuxt.js

My personal website is one of the places where I can easily experiment, and it has been written and rewritten a few times. Having said that, laziness meant that it was stuck on its previous PHP-laravel implementation for a while.

PHP was one of the first things I learned as a developer, and at the time I was learning some frameworks at University and thought Laravel was a decent way of organising my code.

In recent years I’ve been experimenting with newer technologies like node.js, and I believe server-side rendering of Single Page Apps gives you the best of both worlds, in a way: the development speed, service workers, and frontend-code organisation of SPAs, and the SEO advantages of a server-rendered app.

In this case, I chose vue.js as it’s a lightweight and simple to use framework, and in particular nuxt.js which allows you to do Server-side rendering (SSR) with Vue.js and a server framework of choice such as express.

Essentially, nuxt (vue SSR) is good old vue.js with the first page load being rendered on the server, so that search engines can still parse the content. Additionally, it’s easy to implement API routes to execute server-side code with node.js. In this article, I’ll explain how I achieved this for this website.

Note that I do recommend looking into the basics of vue.js and node.js before reading this guide, as I will assume knowledge on them.

Creating the app

The first thing to do is to install create-nuxt-app (npm install -g create-nuxt-app). Then, we can use this to get the boilerplate for our app:

npx create-nuxt-app errietta.me-nuxt

If you observe the created directory, you’ll see… A lot of boilerplate!

The list of directories created by nuxt

Not all of those directories are needed, but it’s worth keeping them around until you know what you’ll need for your project.

The nuxt.js directories

This is a quick introduction to the directories created by nuxt.js; feel free to skip this section if it’s not interesting to you.

  • assets contains files such as svgs and images that are loaded by webpack’s file-loader. This means you can require them within your javascript code.
    • This is in contrast to the static directory, from which files will just be served by express as static files.
  • components contains all the parts that make up a page, such as a Logo component, or a Paragraph component, or a BlogPost component. These are like the building blocks for your pages.
  • layouts This is a way to create a wrapper or multiple wrappers around your page content, so that you can have common content around your page such as headers, footers, navbars, and so on.
  • middleware is a way to run code before your pages are rendered. You may want to check if a user is authenticated, for example.
  • pages is where the main code of your pages go. pages can fetch data via AJAX and load components. This is code that will be executed by both the client and server, so if you have code you only want to execute on the server, you want it accessible by an HTTP api that your pages code can use.
  • plugins is a directory to include third party plugins.
  • server is your express (or other framework) server code. You can just use the framework as normal, provided you keep the code that nuxt.js auto-injects, which takes care of the SSR for you. This is where you can create your APIs that will be accessed by either the server on page load or through AJAX by your SPA.
  • store contains code for your VUEX store.

Developing the application

Now that we know what the directories are about, it’s finally time to get our hands dirty. In the metaphorical sense, of course. Please don’t type with dirty hands…

For my pages, it was mostly static content, so it was easy going. For example, index.vue is the default home page, and I started with standard vue.js code:

<template>
  <div>
    <h1>Hello world!</h1>
     Welcome to my website.
  </div>
</template>

<script>
export default {
  name: 'Index',
  components: { },
  props: { },
  asyncData({ }) { },
  computed: { }
}
</script>

<style scoped>
h1 {
  font-size: 200%;
}
</style>

Nothing out of the ordinary so far. However, my website’s homepage contains the excerpts of my latest blog posts, and in order to retrieve those I want to parse my blog’s RSS. I wanted to do the actual work on the node.js server side, so that I can replace it with a proper API call later on if I wish. In this case, I could call this code from both client and server side, but there are cases where you want server-side-only code, such as database connections, so this is a good example of it.

What I mean by that is that the code to actually fetch the blog posts will always be executed by the node server. The SPA will simply load data from that server, either on load when it’s rendered, or by an HTTP request as explained earlier. Hopefully the below diagram explains what happens:

# Case 1: initial page load

VUE SSR (node) --HTTP--> express api (node) --> blog RSS

# Case 2: content loaded by HTTP on SPA

VUE (browser)  --HTTP--> express api (node) --> blog RSS

You can therefore see that no matter the entry to the app, the business logic only exists and is executed on the node layer.

My next step here was to create server/api/posts.js to create said business logic:

const Parser = require('rss-parser')

// instantiate the parser once; parseURL fetches and parses the feed
const parser = new Parser()

const postsApi = async (req, res) => {
  const posts = await parser.parseURL('https://www.errietta.me/blog/feed')
  // transform data to a uniform format for my api
  return res.json(posts)
}

module.exports = postsApi

This is a simplified version; I have some more logic here if you’re curious: https://github.com/errietta/errietta.me-nuxt/blob/master/server/api/posts.js – but it doesn’t matter; the main point is that the retrieval of the data is done on nodejs.

Now, we can add this route to server/index.js before the app.use(nuxt.render) line. This is because the nuxt middleware will handle all routes that are not handled by other middleware.

  app.use('/api/posts', require('./api/posts'))
  app.use(nuxt.render)

Now we simply need to call this API in the asyncData section of our page. asyncData is a nuxt function that is executed both on rendering the content on the server side and client side. We already have asyncData in index.vue so we can modify it.

  asyncData({ $axios }) {
    return $axios.get('api/posts').then(res => ({ posts: res.data })).catch((e) => {
      // eslint-disable-next-line
      console.log(e)
      return { posts: [] }
    })
  },

Note that we are getting $axios from the object passed to the function. This is the nuxt.js axios plugin, which has special configuration to work with vue. It works the same way as a regular axios instance, so as you can see we are performing an HTTP request to our API. Note that this will perform an HTTP request whether it’s done through the server or client, but because the server-side request is done locally it should not impact performance.

So far, the posts are not used anywhere. Let’s make a posts component in components/Posts.vue

<template>
  <div>
    <div v-for="item in posts" :key="item.id">
      <h4>
        <a :href="item.link">
          {{ item.title }}
        </a>
      </h4>
      <p v-html="item.content" />
    </div>
  </div>
</template>

<script>
export default {
  name: 'Posts',
  props: {
    posts: {
      type: Array,
      default: () => []
    }
  }
}
</script>

Note: be careful with v-html. In this case I somewhat trust my blog’s RSS, but otherwise this can be a field day for someone wanting to play around with XSS attacks.

Either way, this is just a straightforward component that shows the post excerpt and a link to the post. All we have to do is include it in index.vue

Register the component:

import Posts from '../components/Posts.vue'

export default {
  name: 'Index',
  components: {
    'app-posts': Posts
  },
  ...
}

Then use it:

<template>
  <div>
    <h1>Hello world!</h1>
    Welcome to my website.
    <div>
      <h2>blog posts</h2>
      <app-posts :posts="posts" />
    </div>
  </div>
</template>

Note that we are binding posts to the posts property which comes from asyncData. It works the exact same way as data!

If everything is done correctly you should be able to see the blog posts on your page. Congratulations, you’ve made your vue SSR app!

Additionally, if you “view source” you will notice that the blog posts are already rendered on page load. No client side JS is actually required here, thanks to SSR!

Deploying

As I mentioned, my website was an existing platform deployed on DigitalOcean behind nginx. Plus, it hosts my wordpress blog on the same domain, and I didn’t want to change either. Therefore, the node app had to sit behind nginx. It’s a good idea to have some sort of proxy in front of express anyway.

I also use the node process manager, pm2, to background and fork the express process to use more than one CPU.

This is my ecosystem.config.js

module.exports = {
  apps: [{
    name: 'errietta.me',
    script: 'server/index.js',

    instances: 0,
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production',
      HOST: '127.0.0.1',
      API_URL: 'https://www.errietta.me'
    }
  }]
}

I was terrified about getting Ubuntu 14.04 to autostart my node app on system startup; I’d have to mess around with upstart or systemd and I’ve never been particularly good at those things. However, pm2 to the rescue! All I had to do was to run pm2 startup and follow the instructions and voila! My node app would auto start.
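
For reference, the whole dance is roughly this (run from the project directory; pm2 startup prints a command you then run as root):

pm2 start ecosystem.config.js   # start the app using the config above
pm2 startup                     # install the boot-time init hook
pm2 save                        # remember the process list across reboots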

I also followed this tutorial to set up the nginx reverse proxy. As mentioned, I did want to preserve the php configuration of my blog, which ended up being surprisingly easy.

First step was to register the node.js upstream:


upstream my_nodejs_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

I edited my already existing server { } block and I kept this section:

server {
    # other config ....
    location /blog {
        index index.php index.html index.htm;

        if (-f $request_filename) {
            break;
        }

        if (-d $request_filename) {
            break;
        }

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /path/to/$fastcgi_script_name;
            include fastcgi_params;
        }

        rewrite ^(.+)$ /blog/index.php?q=$1 last;
        error_page  404  = /blog/index.php?q=$uri;
    }

Before adding the section to proxy everything else to node:

   location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header X-NginX-Proxy true;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_max_temp_file_size 0;
      proxy_pass http://my_nodejs_upstream/;
      proxy_redirect off;
      proxy_read_timeout 240s;
    }

And, we’re done – I had replaced my site’s php back-end with a node.js vue SSR backend and preserved the PHP parts I still needed, quite easily.

I hope you enjoyed this account of how I initiated, developed, and deployed my website to its new vue-ssr home, and that it proves helpful in some way.

If you want to see the finished version, check out my github!

by Errietta Kostala at January 19, 2019 08:25 PM

December 21, 2018

Deedra's blog

kj7cmd: ham radio call sign etc

So I finally managed to get my ham radio license in November, after several years of wanting to do it but not having the patience or ability to really study well. I was talking to an old roommate and she told me about hamtestonline. Apparently hamtestonline is a really great way of studying for your ham radio exams, as I had call to find out. You are allowed to miss 9 questions on the technician exam; I managed to only miss 5!

Next goal is to get my general license. I’ve already paid for the course; I just need to actually do the work. Hopefully, I’d say by March at the latest, I’ll have my general license.

I’m not sure if I’ll get my extra license or not. I will probably do it just to say I did it; however, it’s not really a priority. Most of the ham bands can be gotten with your technician and general licenses at this point. Chris told me that getting that one is more to say you’ve done the work than anything else.

I wonder how in the world we can run an HF antenna in an apartment…..lol!

by deedra at December 21, 2018 10:24 PM

tarch (the new talking arch)

I have no real idea what’s going to happen with this project at this point, or even whether it’s going to actually move forward. Currently I’d consider this project up in the air, a giant question mark over whether it’s going to move forward. I got nailed with medical things, so we’ll see what happens.

by deedra at December 21, 2018 09:55 PM

November 04, 2018

freenode staffblog

freenode #live 2018 is a wrap, thank YOU!

Wow. What an incredible weekend. I want to express my heartfelt thanks to all those who joined us for freenode #live this weekend, whether you participated in person or tuned into the livestream.

We were fortunate enough to have an exceptionally inspiring and engaging speaker line-up - thank you so much for coming to inspire the community. Thank you also to our supportive sponsors who have enabled us to put this event on again, and of course the freenode volunteer team - both those who detached for a weekend to throw themselves into an entirely different type of volunteering and those who stayed behind to keep the network running and also doubled as livestream monitors, a special thank you to you all for your willingness to adapt and get stuck in. A huge thank you also to the venue staff, and to the AV technicians!

YOU are freenode.

I hope that the several calls to action in various talks will inspire and encourage as we jump from C (communicate, collaborate and create) to E (engage, educate and empower).

I am confident that we will do a more extensive recap when we have started to recover from the weekend, but before I nod off to sleep I want to do a huge shout-out of thanks to those of the freenode #live participants that hailed from underrepresented groups. In a time where the world is oftentimes unjust and outright scary it has been an incredibly inspiring and empowering experience to meet so many of you this weekend. I am under no illusion that there isn't a fight still to be had to ensure full inclusion and equality and I am incredibly happy that you form a growing and significant part of our community.

YOU are freenode also. Thank you.

Lost property?

Towards the end of day two a pair of glasses were found in the theatre and while a few announcements were made no one came forth to claim the glasses. In the event that you lost yours but would prefer to be able to see properly again, please do get in touch with us via e-mail or IRC and we will arrange for the glasses to be returned to you!

by christel at November 04, 2018 10:59 PM

October 01, 2018

freenode staffblog

Did you hear? freenode #live is coming to town!

A little over a month from now, the second freenode #live will take place at We The Curious in Bristol. With talks from many free and open source community leaders such as Leslie Hawthorn, VM Brasseur, Chris Lamb and Bradley Kuhn, freenode #live once again brings an international free software gathering to the South West.

Other exhibitors at this year’s event will be the returning main sponsor, Private Internet Access, as well as the Free Software Foundation, OpenSuSE, Minetest, Linux Journal and the Handshake Project.

Handshake is a new decentralized alternative DNS root, and for a short time before launch freenode users and members of the free and open source community are encouraged to sign up today at Handshake.org and receive free Handshake coins that can be used to purchase domain names when Handshake launches in the near future.

Tickets for the full event start at £15 (approximately USD $20, 18 Euro) but there is a free tier for those wanting to catch the talks and exhibition hall only. Bristol is well connected with Bristol International Airport (BRS) serving many European destinations, and the city is a few hours by train from London and Manchester.

Please note: This guest blog entry has been written by our sponsor Handshake.

by mattl from Handshake at October 01, 2018 04:47 PM

September 24, 2018

freenode staffblog

Spam filtering

Hi,

As most of you are aware, we've been experiencing significant spam over the past few weeks. As a result, we have decided to roll out a server-side spam filter. Unlike our current spam-mitigating techniques, this system applies to private messages and does not let the first matching line get through.

Various ethical concerns have been raised over the course of introducing this feature. They'll be addressed below. The short version, though, is that the system has various limitations built in designed to prevent operator abuse. Only a tiny bit of information can get out of the filters, and they do not have access to much information themselves, to the extent that we believe the obvious ways to abuse such a feature are impractical.

We've historically been reluctant to take steps like this, and we remain so, but we believe the disruption has reached such a level that this is necessary to allow the communities using freenode to collaborate effectively. The prior complement of anti-spam measures represents our preferred approach, and we intend to employ this only when they prove insufficient to minimize disruption.

For the technically inclined, you can view the changes here.

  • Could this be used to spy on users? Which filter a user matched is not reported to staff, only that one did. This limits the theoretical maximum rate of passive monitoring to one bit per message, far less than the information content of conversations.

    Recipients of private messages are not included in the line that filters match on, so staff cannot use spam filters to see who is talking to whom.

  • Could this be used to shadowban users? No. If a filter blocks a message, its sender is either disconnected from the network or sent an error message.

    Currently, the filter system is configured not to use the nick, username, or hostname for filter matching, so it can't discriminate against particular users at all.

The exact information filters "see" is as follows:

  • The type of message (PRIVMSG/NOTICE)
  • The target of the message, if that target is a channel. For private messages, filters can see that they are PMs but not who their target is.
  • Whether or not the sender is identified (but not their account name)
  • The full contents of the message

The code can be configured to filter on the nick!user@host of the sender. We haven't enabled this, and have no current plans to, but this is subject to change should the nature of the spam demand it.

Filtering is always performed on the server originating a message, and inside the ircd process. This system will never cause a message to be distributed more widely than before.

Staff can, as always, answer your questions about this change, and we welcome constructive feedback. Private messages to staff are not subject to filtering.

by edk at September 24, 2018 05:47 PM

September 03, 2018

Deedra's blog

the new talking arch (tarch)

for many reasons that have come up in the last couple days mike and i decided to just fork talking arch. I’d rather not drag frustration and politics into a project I want to help start and we decided that since we want to add quite a few new features to talking arch as well as try and  keep a generic talking arch livecd for those who just want straight talking arch.

 

And so we’re forking it. we’ve decided to do several things i’ll mention below all of which we think will help those who need a standard talking arch livecd that talks but we want to create a special livecd with several things.

 

*we want to create 2 sets of livecds 1 which uses fenrir the new screenreader and 1 which uses speakup.

*in the long run we will probably have to move to fenrir but that’s quite a ways off i suspect.

*we want to create a livecd with many admin and rescue tools that  will assist those blind folks who  are system admins who need those features.

*We also want to add support to that same livecd to include other installers  so that say a user who needs a talking livecd but debian isn’t talking,  the user can use that admin type livecd to install debian voidlinux arch you get the idea.

 

We are greatly looking forward  to getting this project moving and hope that those who may use it will enjoy it. I’ll announce further things as we get things rolling. I suspect however the first step is to bring the generic livecd current so people dont have a livecd that’s a year old or more.

Stay tuned!

by deedra at September 03, 2018 04:12 AM

September 02, 2018

Deedra's blog

thoughts on server hosting and VPSs

So last year, when I originally bought kittyrats.com, I wanted to relearn some of my admin skills, part of which meant getting a VPS to play with. We had seen some of Scaleway's offers; at one time they had a nice little ARM server with a decent amount of disk and RAM for $3/month. I was also looking at prgmr.com, as that's where Chris's domain, the-brannons.com, has been hosted for many years.

I tried Scaleway for a while and hated it: too many accessibility problems on their site, for one thing, and many other things I disliked about them. I ended up going with, and sticking with, prgmr for many reasons. The setup was nice, and getting Void Linux set up on it was easy for Chris to do. Having an out-of-band console via SSH is absolutely wonderful, and the support staff are incredibly wonderful to deal with. So there is my recommendation, so to speak. I have nothing but wonderful things to say about prgmr, and, well, Scaleway... the accessibility nightmare of the century, among other bad things.

by deedra at September 02, 2018 08:24 AM

September 01, 2018

Deedra's blog

my views on marijuana, recreational and medical

I've used marijuana for several years now for medical reasons. I have a lot of issues that I'll write about later, but for now let's say that I have severe chronic pain and vomiting, and psych issues I don't think they've fully diagnosed. For me, marijuana is a needed medication.

Doctors here don't deal with pain meds much any more and don't like giving them out for chronic pain. For my part, I refuse opiates for many reasons, the biggest of which is that I don't like what they do to me, plus they make me puke.

So medical marijuana for me it is. It controls the pain and nausea, but it has the side effect that I'm psychologically addicted to it. What this means is that my body seems to essentially require that the levels be stable; when they're not, it causes my psych issues to spiral, which makes me puke... you get the idea.

As for recreational marijuana, it's now a thing I could take or leave. It's a medicine to me, and so essentially it's no longer fun.

by deedra at September 01, 2018 08:07 AM

fenrir screenreader

I've not discussed this screen reader yet, as it's been a while since I've had a blog I've been happy with.

Fenrir is a user-space screen reader for Linux. I love it, as I think it's a good replacement for Speakup, considering Speakup has some issues.

In reality, all screen readers have their issues, but I prefer Fenrir for many reasons. Mainly, it's not built into the kernel, and I can use it with an X terminal and get around Orca's terminal bugs.

I think the biggest bug I've found in Fenrir so far is one that I hope will be fixed before the next release.

by deedra at September 01, 2018 07:59 AM

kittyrats: part2

We have two kittyrats and definitely don't want more. Jingles and Bastet are like our children, for lack of better wording. Jingles is starting to have what I guess I'd call old-age issues: she gets really bad hairballs that cause her to spew a lot. So it's off to the vet on the 10th, and maybe we can find out what's wrong with the Jingle kitty.

Bastet, on the other hand, is a healthy girl, but she's got this weird fear of Mike's guide dog. She kind of freaks out a little, but she's also getting curious and brave, so we'll see what she does.

The kittyrats will have their own WordPress blog soon, like me. They've got an older blog, but it's time to hear what they have to say!

by deedra at September 01, 2018 07:56 AM

what’s a kittyrat?

Everyone probably wonders by now: what's a kittyrat? Chris and I started calling our cats kittyrats when we got Jingles. The reason why is that she's got a super long tail and a long body and legs. The term kittyrat stuck because, well, it's stupidly cute and fun. :P

We got a roommate a bit ago, and now Mike calls them kittyrats too!

by deedra at September 01, 2018 07:52 AM

talking arch thoughts and decisions

Talking Arch was a project created by Chris several years ago. It provides users with an easily accessible talking live CD for Linux so they can install Arch Linux and possibly other things. I also know of many who use it as a rescue CD.

When Chris stopped using Arch Linux, the project was handed over to someone who took it on and maintained it up until their x86_64 machine went boom. Mike and I have decided to step in and either (a) take over the project or (b) fork it if we can't take it over. Either way, the decision is coming soon, because if I don't hear anything by Monday, I'll fork.

by deedra at September 01, 2018 07:48 AM

freenode

I've been freenode staff off and on over the years as things have gone. I've been back for a bit over a year, give or take, and it's been an interesting experience. Overall it's been extremely enjoyable, and despite the spam, the strange ones and the bad ones, so to speak, it's been well worth coming back.

I've changed my schedule recently to cover US nights and early EU sleeping hours, so hopefully we've got a few more hands to help out when users need it.

by deedra at September 01, 2018 07:44 AM

updates or something

I've had multiple blogs in the past and never really kept up with them, for multiple reasons. This is one of those blogs where anything goes. I may discuss personal things, open source stuff and such, and there will always be kittyrats!

by deedra at September 01, 2018 07:40 AM

August 06, 2018

freenode staffblog

Continued and persistent spambot attack and clarification

As you may be aware there has been a prolonged spambot attack directed at freenode (and other IRC networks) in recent weeks, targeting a number of individuals involved with freenode and the wider IRC communities. The freenode team, and people involved with the wider IRC communities, are working hard to mitigate and reduce the spam that hits your community channels.

The spam content has changed in the last few days and while I am extremely glad that the attacks appear to no longer focus on members of the volunteer team and no longer involve libellous and false statements relating to these volunteers, we feel we should provide some clarification on some of the claims that are being made in the current spamwave relating to freenode and its involvement in Handshake.

The current spambot attacks state that freenode is involved with an 'ICO scam' relating to the Handshake project. Most freenode volunteers have involvement with one or several FOSS projects, often projects that use the freenode network as part of their communications toolbox. Handshake is no different in this regard, as it is a project that I have been involved with. I am deeply sorry to those affected by the spam, to freenode and to Handshake that spammers have chosen to use my involvement as a further platform to attack the freenode communities, and now also Handshake.

Prior to announcement, the Handshake project raised USD 10.2 million in funding from project supporters and the project made the decision to not only give a substantial amount of its coin supply to people and projects within the FOSS sphere but to also donate the USD 10.2 million (FIAT) to projects whose work the initial project contributors admire and/or rely upon. Like many projects within the FOSS world, Handshake has extensively used other free and open source software to build its codebase, and FOSS also lies at the foundations of the internet architecture that we rely upon day to day.

One of the projects Handshake identified as useful is indeed also freenode, which is on the pledgee list to receive a FIAT donation from the project. This donation will, among other things, contribute towards making the freenode #live conference bigger and better, and also towards some development work that has otherwise been on the back-burner. freenode is happy to be included in the list of recipients and honoured to be appreciated in this way.

  • Handshake is a FOSS project, and like many FOSS projects it has a channel on the freenode network.
  • Handshake is an experimental peer-to-peer DNS for which one aim is to be more resistant to censorship than existing systems.
  • Handshake is doing a faucet distribution to a number of FOSS contributors and projects, many of which are freenode users.
  • Handshake is making a fairly hefty (USD 10.2 million) overall financial contribution to projects within the FOSS sphere in addition to its faucet allocation of HNS coins and freenode is one of many projects within the FOSS sphere that is receiving a contribution.

As such, any link between freenode and Handshake is tenuous at best and the current wave of spam would appear to be designed to do little bar discredit freenode and the Handshake project both.

I am sure you will appreciate that the freenode volunteer team is not in a position to answer questions relating to the Handshake project any more than they are in a position to answer questions relating to any other new FOSS project that starts to use the freenode network.

But I also understand that some of you may have additional questions relating to Handshake. I am sure you will appreciate that the freenode website is not the platform for such a discussion, and I would suggest that you visit the Handshake website, Handshake Github Repository and Handshake Documentation if you are interested in learning more about the project, and that you direct any questions to the Handshake project via the appropriate communication channels for the project.

by christel at August 06, 2018 06:30 AM

July 28, 2018

erry's blog

Installing and getting started with Python

I like experimenting with and learning new things. I’d never looked at Python before, because its syntax put me off, coming from a background of languages with C-like syntax. However, I eventually convinced myself to at least have a play with it and I’ve started working on a simple application that I can deploy to AWS.

Of course, the first step with any new tool is always getting it set up, and it can sometimes not be as straightforward as one would expect. I had a bit of trouble at first, so I thought I would share my experience for others that want a quick way of getting started.

Pyenv

The first thing I want to say is that I would strongly suggest using Pyenv. I always suggest version managers for programming languages, because not only do they allow you to have more than one version installed and be able to switch between them, but they also change the default paths for module installation to your user directory, meaning you don’t need sudo to install dependencies – a big advantage for me.

Dependencies

The first step to installing pyenv is to install the dependencies for building Python. These vary by operating system, but there is a guide over at the Pyenv github.

In my case, I use Ubuntu, so I had to run the following command:

sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev

Installation

Once that has been taken care of, it's time to install Pyenv itself. Once again, follow the instructions over at the github page for pyenv for your OS.

I used the automatic installer, which is the easiest way, but it also requires git to be installed in addition to the above dependencies.

After installing it, you need to add the following to your ~/.bash_profile or ~/.bashrc or equivalent and re-start your terminal session:

# Load pyenv automatically by adding
# the following to ~/.bash_profile:

export PATH="/home/errietta/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

If it’s worked, running pyenv should show you a help screen!

Install Python with pyenv

Now that you have pyenv working, you can easily install one or more versions of Python. Since I wanted to use it with serverless, I needed to get either 2.7.* or 3.6.*.

Run pyenv install 2.7.8 or pyenv install 3.6.6

If everything goes correctly, it should only take 5-10 minutes. If not, the output should say what the problem was – the common build problems page on github has more information on fixing problems, but generally it should just work if you have installed all the dependencies.

It should say “Installed Python-3.6.6 to /home/errietta/.pyenv/versions/3.6.6” when it is finished.

Hello world

Now you can make your first python code. First of all, inside the directory of your project, you should run pyenv local 3.6.6 (or whichever version you installed), so that pyenv knows which version of python to use for your project.
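
As a quick sanity check, pyenv local records the chosen version in a .python-version file in the project directory, and python should now resolve to that version (output shown for 3.6.6; yours will match whichever version you installed):

$ pyenv local 3.6.6
$ cat .python-version
3.6.6
$ python --version
Python 3.6.6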

Now you can make your code file, say hello.py:

print("Hello world")

And to run it:

python hello.py

Congrats, it works!

Modules

Consider the following tree:

.
|-- hello.py
`-- util
    `-- math.py

And the following code in each file:

hello.py:

from util.math import add

print("Hello world")
print(add(2, 3))

util/math.py

def add(a,b):
  return a+b

This might work, but if you lint your code with pylint (or if your IDE does it for you – Hi VS code!), you’ll notice that it complains:

hello.py:1:0: E0611: No name 'math' in module 'util' (no-name-in-module)

What you need to do in this case is create an empty __init__.py inside util. This tells python that your directory contains python modules. It can also execute initialization code, but in this case you can just leave it empty.

Now that should make pylint and/or your IDE happy :)
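
As mentioned, __init__.py can also run initialization code when the package is first imported. As a purely hypothetical example (the empty file above is all pylint needs), you could use it to re-export add:

# util/__init__.py
# Runs once, when `util` is first imported. Re-exporting add here lets
# callers write `from util import add` instead of `from util.math import add`.
from util.math import add

__all__ = ["add"]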

Next steps

by Errietta Kostala at July 28, 2018 12:05 PM

July 27, 2018

freenode staffblog

Current spambot attack on freenode (and elsewhere)

Many of you will have noticed that over the last few days there has been an extensive spambot wave on freenode, and on other networks.

The fairly aggressive spambot attacks link to websites that we believe to have been set up to impersonate freenode volunteers, and that we believe to contain offensive and incorrect information intended to defame and libel members of the freenode volunteer team.

Naturally, the matter has been escalated to law enforcement and both the project and the individual volunteers concerned have sought legal advice in connection with the current attack.

Due to the nature of the attack, this is of course causing serious emotional distress on the part of the affected volunteers and their immediate family and social circles, as well as the rest of the volunteer team.

On behalf of the entire team, I would like to express thanks to those of you who have reached out with words of encouragement and support, and especially those of you from other IRC networks who have invested your time and efforts in trying to help mitigate and support.

I would also like to apologise to those users and channels (on and off freenode) who are affected by the spam.

by christel at July 27, 2018 01:20 PM

July 13, 2018

freenode staffblog

freenode #live 2018: Welcoming (some of) this year's keynote speakers

It is with a great deal of excitement that I can announce some of this year's keynote speakers for freenode #live. The entire freenode team is excited to be welcoming the following FOSS rockstars to Bristol this November: Bradley Kuhn, Chris Lamb, Kyle Rankin, Leslie Hawthorn and VM Brasseur.

We have a few more exciting announcements to make in the lead-up to the conference! You don't want to miss out, and we encourage you to head over to https://freenode.live to get your tickets for this year's event! And if you want to join this year's speaker line-up then you still have some time, the CFP is open and we're looking forward to hearing from you.

Bradley M. Kuhn

Bradley M. Kuhn is the Distinguished Technologist at Software Freedom Conservancy, and editor-in-chief of copyleft.org. Kuhn began his work in the software freedom movement as a volunteer in 1992, as an early adopter of GNU/Linux, and contributor to various Free Software projects. Kuhn's non-profit career began in 2000 at FSF. As FSF's Executive Director from 2001-2005, Kuhn led FSF's GPL enforcement, launched its Associate Member program, and invented the Affero GPL. Kuhn was appointed President of Conservancy in April 2006, was Conservancy's primary volunteer from 2006-2010, and has been a full-time staffer since early 2011. Kuhn holds a summa cum laude B.S. in Computer Science from Loyola University in Maryland, and an M.S. in Computer Science from the University of Cincinnati. Kuhn received an O'Reilly Open Source Award, in recognition for his lifelong policy work on copyleft licensing. You can follow him on Twitter @bkuhn_ebb_org

Chris Lamb

Currently Project Leader of the Debian GNU/Linux project and a member of the Board of Directors of the Open Source Initiative, Chris is a freelance computer programmer, author of dozens of free-software projects and contributor to hundreds of others. He has been an official Debian Developer since 2008 and is currently highly active in the Reproducible Builds sub-project, for which he has been awarded a grant from the Linux Foundation's Core Infrastructure Initiative. In his spare time he is an avid classical musician and Ironman triathlete. Chris has spoken at numerous conferences including LinuxCon China, HKOSCon, linux.conf.au, DjangoCon Europe, LibrePlanet, OSCAL, All Things Open, SCALE, Software Freedom Kosovo, #freenode Live, FOSS'ASIA, and many more. You can follow him on Twitter @lolamby

Kyle Rankin

Kyle Rankin is the Chief Security Officer at Purism, SPC and a Tech Editor and columnist at Linux Journal. He is the author of Linux Hardening in Hostile Networks, DevOps Troubleshooting, The Official Ubuntu Server Book, Knoppix Hacks, Knoppix Pocket Reference, Linux Multimedia Hacks and Ubuntu Hacks, and also a contributor to a number of other O’Reilly books. Rankin speaks frequently on security and free and open source software including at BsidesLV, O’Reilly Security Conference, OSCON, SCALE, CactusCon, OpenWest, Linux World Expo and Penguicon. You can follow him on Twitter @kylerankin.

Leslie Hawthorn

An internationally known developer relations strategist and community management expert, Leslie Hawthorn has spent the past decade creating, cultivating, and enabling open source communities. She’s best known for creating the world’s first initiative to involve pre-university students in open source software development, launching Google’s #2 developer blog, and receiving an O’Reilly Open Source Award in 2010. Her career has provided her with the opportunity to develop, hone, and share open source expertise spanning enterprise to NGOs, including senior roles at Red Hat, Google, the Open Source Initiative, and Elastic.

If you cheer during movies when you hear the words “I fight for the users” or “Get your head out of your cockpit,” the two of you will likely get along famously. Follow her on Twitter @lhawthorn or read her blog at https://hawthornlandings.org/

VM Brasseur

VM (aka Vicky) spent most of her 20 years in the tech industry leading software development departments and teams, and providing technical management and leadership consulting for small and medium businesses. Now she leverages nearly 30 years of free and open source software experience and a strong business background to advise companies about free/open source, technology, community, business, and the intersections between them.

She is the author of Forge Your Future with Open Source, the first book to detail how to contribute to free and open source software projects. Think of it as the missing manual of open source contributions and community participation. The book is published by The Pragmatic Programmers and is now available in an early release beta version. It's available at https://fossforge.com.

Vicky is the Vice President of the Open Source Initiative, a moderator and author for opensource.com, an author for Linux Journal, and a frequent and popular speaker at free/open source conferences and events. She's the proud winner of the Perl White Camel Award (2014) and the O’Reilly Open Source Award (2016). She blogs about free/open source, business, and technical management at {anonymous => 'hash'};. You can follow her on Twitter @VMBrasseur

by christel at July 13, 2018 01:49 PM

June 29, 2018

freenode staffblog

freenode Security Update: Reused Password Attack

In the very early hours of today (Friday 29 June 2018), we became aware of unauthorised attempts to access a substantial number of freenode accounts. This appears to be the result of an attacker using lists of usernames and passwords from other online services that have previously been compromised, and trying these combinations on freenode accounts.

Our investigations commenced immediately and we found that the attacker had been able to log in to a number of freenode accounts.

freenode has not been hacked or compromised.

Affected information

For the affected accounts, usernames (nicknames) and passwords are involved. Additionally, for some accounts, other information including channel access and channel lists may be affected.

What we are doing

We are committed to protecting your data and, as a precaution, we have frozen the affected accounts and are in the process of sending individual notifications to affected users.

What you can do

If your account was affected, we are in the process of contacting you directly with information to reset your password and restore access to your account.

We encourage all users to practice good password hygiene, even if your account has not been affected at this time.

Attacks such as these have a tendency to escalate and cause a domino effect and we will continue to investigate and monitor for new attack vectors.

Password reuse means that once one account is compromised, all of the accounts that share that password become compromised.

by christel at June 29, 2018 08:08 AM

June 22, 2018

freenode staffblog

freenode and irc.com

In light of the two ongoing threads on Hacker News and Reddit concerning LTMH and irc.com, we have had a fair few freenode users contact us with questions as to whether irc.com will replace and/or absorb freenode, and what impact it would have on freenode communities.

I wanted to make sure that I addressed these concerns, and I can assure you that there are no plans on the part of freenode or LTM that involve any changes to freenode; freenode is not on the brink of shutdown. If anything, we are excited to be celebrating our 20th anniversary at this year's freenode #live event, and we hope to see at least another 20 after that.

The freenode project exists to support the development and use of Free and Open Source Software, and to that end it serves a very different purpose to the one that the visionaries behind irc.com have in mind. I fully believe that freenode and irc.com can co-exist, just as we co-exist with the numerous other IRC networks out there, and I would like to hope that irc.com may encourage those of their users who would be a good fit for freenode to come check us out, just as I hope that we may be able to send someone their way should we come across users who have great potential for running a series of training sessions or similar.

And while LTM has provided freenode with some much needed resources following last year's announcement, any potential partnership between the two will be limited to the possibility of freenode being represented in the irc.com foundation. It is my understanding that the foundation will operate on a nonprofit basis and will seek to bring together network operators and ircd/services developers to identify irc-related projects that are in need of funding and support. The irc.com team hopes to establish positive working relationships with operators, developers, ircv3 and end users alike, and the foundation, which will be governed by community consensus, will seek to ensure that irc.com and its efforts benefit all, not only those organisations that LTM supports today.

With regard to irc.com itself, I am curious and excited to see what's in store in terms of utilising IRC as a platform for delivering training sessions, and the idea of a virtual incubator using an IRC backend. There have been some incredible developments on the client side in recent times, with both IRCCloud and KiwiIRC continuing to work on features that will soon introduce video and voice calls, file-sharing and a host of other productivity tools that provide the irc.com team with good foundations for success.

For the sake of full disclosure: I am an Executive Vice President at LTM, and I work closely with all subsidiaries within the Group, irc.com included.

by christel at June 22, 2018 09:20 PM

June 16, 2018

erry's blog

Typescript and the Beanstalk

Deploying typescript apps to Beanstalk with CircleCI

Before we get started, note that this post assumes you already have your CircleCI/Beanstalk integration working. The reason for this is that setting that up is itself a very long-winded process. I may make a video about it some time, but in the meantime there is a very good Medium post that explains how to set it up.

Also, a big special thanks to Rokt33r for the example repo that I shamelessly copied everything from :)

Why is this so different?

So, you may have seen examples of deploying node apps to Beanstalk and even got them working, but now you want to deploy a typescript app. You may be wondering how to ship your built files, since just shipping your source code isn’t enough to get it running, unlike plain node.js.

So the key thing to remember is that you deploy your built assets, not your source code. That’s actually true for even plain node apps, but it just so happens that in that case the build and the source is the same.

So, how do you do this? Well, assuming you’ve done the ~~impossible~~ hard bit of setting up AWS, Beanstalk, and CircleCI, the rest is actually quite easy.

Step 1: tsconfig.json

You should have tsconfig.json set up to compile your code. If you already have it, you don't need to change it; just take note of what outDir is, as you'll need it. If you don't have it, or want to see an example, here is one.

{
    "compilerOptions": {
        "target": "es6",
        "module": "commonjs",
        "outDir": "dist",
        "sourceMap": true,
        "baseUrl": ".",
        "paths": {
            "*": [
                "node_modules/*",
                "src/types/*"
            ]
        }
    },
    "include": [
        "src/**/*"
    ],
    "exclude": [
        "node_modules"
    ]
}

Step 2: package.json

You should already have start and build scripts, but if not, here is an example:

 "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },

build is the command used to build your software; in this case it’s tsc to compile the typescript down to javascript.

start is how you start your software; obviously this depends on your app, but it's usually something like node dist/index.js. Note that you're pointing to your compiled js code. The location may not be dist; it's whatever you have set as outDir in tsconfig.json.

Step 3: dist step

If you've seen the AWS Elastic Beanstalk UI, you'll know that you basically deploy apps to it by uploading .zip files. If you use the command line (eb deploy), it does that for you. What we essentially want to do is build your distribution .zip file to upload. How? Essentially, by zipping your dist directory.

Make a file called scripts/dist.sh, and put the following code in:

# If the `dist` directory doesn't exist, create it
stat dist || mkdir dist
# Archive artifacts
zip dist/$npm_package_name.zip -r dist package.json package-lock.json

If you’re not using dist as the directory name for your build (remember the outDir from earlier), you need to change it above.

This will basically build your zip for you!
You can test it out manually:

[email protected] [4] (git)-[master] ~/hyperbudget-backend % bash scripts/dist.sh
  File: 'dist'
  Size: 4096            Blocks: 8          IO Block: 4096   directory
Device: 801h/2049d      Inode: 16393011    Links: 6
Access: (0775/drwxrwxr-x)  Uid: ( 1000/errietta)   Gid: ( 1000/errietta)
Access: 2018-06-16 11:41:27.842696048 +0100
Modify: 2018-05-24 19:59:55.741014000 +0100
Change: 2018-05-24 19:59:55.741014000 +0100
 Birth: -
  adding: dist/ (stored 0%)
  adding: dist/app.js (deflated 70%)
  ....
  adding: dist/index.js (deflated 39%)
  adding: dist/index.js.map (deflated 58%)
  adding: package.json (deflated 65%)
  adding: package-lock.json (deflated 77%)
[email protected] [4] (git)-[master] ~/hyperbudget-backend %

This will actually generate dist/.zip – that's because when you run it manually, $npm_package_name is not set (npm sets it from the name field of package.json when running scripts). Feel free to look at that zip file and verify that all your built javascript is in there. If it's correct, delete the file; we'll tell npm to generate it instead in the next step!

Step 4: Set up the npm dist script

Really simple, just add another script to your package.json. For example:

  "scripts": {
    "build": "tsc",
    "dist": "sh ./scripts/dist.sh",
    "start": "node dist/index.js"
  },

Now you can run your dist step by doing npm run dist! If you run that, you will now have a zip file in your dist folder that is actually named after your app, and you can open it and again verify that you have all your compiled javascript there.
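
If you'd rather inspect the archive without extracting it, you can list its contents (assuming the standard unzip tool is available; dist/my-app.zip stands in for whatever your package is actually called):

$ unzip -l dist/my-app.zip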

Step 5: Amend CircleCI configuration

Now we need to tell CircleCI to run npm run build and npm run dist. This will depend on your configuration, but basically edit .circleci/config.yml and make sure that npm run build and npm run dist are run before eb deploy. For example, I have the following:

      - deploy:
          name: Deploy to Elastic Beanstalk
          command: |
            npm install && npm run build && npm run dist && eb deploy --staged MyApp-env

This will install all my dependencies, compile my app down to javascript, run npm run dist to generate the zip file, and deploy to Beanstalk. Hang on, though, how do we tell Beanstalk to use that zip file?

Step 6: Beanstalk configuration

Create or amend your .elasticbeanstalk/config.yml file. In there, you need to add the following:

deploy:
  artifact: dist/YOUR-APP-NAME.zip

The name of the zip should be the same as your npm package name if you have set up the dist.sh script to use $npm_package_name. You can always manually run npm run dist and then see the name of the file it generates.

If you don't have a .elasticbeanstalk/config.yml file, here is a simple one that works for me:

branch-defaults:
  master:
    environment: MyApp-env
  staging:
    environment: MyApp-env
  dev:
    environment: MyApp-env
global:
  application_name: my-app
  default_platform: 64bit Amazon Linux 2017.03 v4.5 running Node 8.10
  default_region: eu-central-1
deploy:
  artifact: dist/my-app.zip

You should change the branch names and environment names to match your deployed branches and your Beanstalk environment names respectively, and you can change the region and platform if you need to. The important thing is for the artifact to be correct; this will tell Beanstalk to deploy your zip file.

That’s it!

If you set all this up correctly, CircleCI should now be deploying your zip file to Beanstalk. You can check the CircleCI build information and verify that it runs tsc, zips up your files, and deploys to Beanstalk. Here's what mine looks like for comparison:

#!/bin/bash -eo pipefail
npm install && npm run build && npm run dist && eb deploy --staged MyApp-env


> [email protected] install /home/circleci/app/node_modules/ursa
> node-gyp rebuild

make: Entering directory '/home/circleci/app/node_modules/ursa/build'
  CXX(target) Release/obj.target/ursaNative/src/ursaNative.o
  SOLINK_MODULE(target) Release/obj.target/ursaNative.node
  COPY Release/ursaNative.node
make: Leaving directory '/home/circleci/app/node_modules/ursa/build'
added 232 packages in 11.433s

> [email protected] build /home/circleci/app
> tsc


> [email protected] dist /home/circleci/app
> sh ./scripts/dist.sh

  File: ‘dist’
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 100016h/1048598d    Inode: 7272        Links: 4
Access: (0755/drwxr-xr-x)  Uid: ( 3434/circleci)   Gid: ( 3434/circleci)
Access: 2018-06-14 13:37:17.348967660 +0000
Modify: 2018-06-14 13:37:17.376967104 +0000
Change: 2018-06-14 13:37:17.376967104 +0000
 Birth: -
  adding: dist/ (stored 0%)
  adding: dist/App.js (deflated 51%)
  adding: dist/index.js (deflated 39%)
  adding: dist/script/ (stored 0%)
  adding: dist/App.js.map (deflated 64%)
  adding: package.json (deflated 56%)
  adding: package-lock.json (deflated 77%)
Uploading my-app/app-******.zip to S3. This may take a while.
Upload Complete.

INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
INFO: New application version was deployed to running EC2 instances.
INFO: Environment update completed successfully.

You can see that it ran npm install, tsc, and then my ./scripts/dist.sh which zipped all my built files, and then successfully deployed it to EC2.

I hope this helps someone else, thanks for reading :)

by Errietta Kostala at June 16, 2018 03:01 PM

June 06, 2018

freenode staffblog

Announcement: jobs.freenode.net - New Service

Over the last two decades, we have found that a variety of freenode community members have reached out to us when they have been involved in the hiring process at their places of work. We have always been keen to support and promote relevant topics within the wider freenode communities, and we are excited to be launching the new jobs.freenode.net website. Whether you are hiring for a permanent full-time role, looking to fill a temporary contract or looking to attract volunteer contributors for your FOSS project, we very much welcome and encourage you to use the site.

We hope that the new site will provide a useful addition to the existing freenode projects, such as the IRC network and the #live conference.

Hiring?

Head over to jobs.freenode.net and add your job openings! The service is free to use, although we would be grateful for a contribution towards the operating costs of freenode services and the #live conference in the event that you successfully match via the website. If you do match and wish to make a contribution, please contact us on [email protected]

FOSS or other peer-directed project on the hunt for volunteers?

Why not add a post on the jobs.freenode.net site to see if you may be able to attract some contributors from the wider freenode and FOSS communities?

Looking for a new job or a volunteer role?

Keep an eye on jobs.freenode.net to see if something of interest is added. We will utilise wallops on the freenode IRC network to provide a brief summary of available roles periodically; if you wish to receive these, please set yourself +w (/umode +w or /mode yournick +w).

Feature requests, suggestions and feedback?

The github repository can be found here; you can also drop us an e-mail at [email protected] or find us in #freenode-jobs on the freenode network.

by christel at June 06, 2018 09:24 AM

May 26, 2018

freenode staffblog

Services maintenance and password security

We recently took our services (NickServ and friends) offline for maintenance to ensure encrypted storage of the services database.

During this process, we accidentally started services with an empty database. While we quickly realized the mistake, a large number of users were already logged out before we stopped the process, receiving a message like "Account youruser dropped, forcing logout". Services were quickly restored to normal afterwards and people were able to log in to their accounts as before. We would like to apologize for the disruption and confusion this may have caused.

Unfortunately, some people have used this opportunity to spread some misinformation, claiming that "all passwords have been released". This is not the case; there has been no threat to account security due to this incident. Additionally, we do not store passwords in a recoverable form at all.

In any case, we do recommend using a unique and secure password not shared with other online services. If you wish to change your password, you may do so using the command /msg NickServ SET PASSWORD <newpassword> while logged in (replacing <newpassword> with the password you wish to set). You might wish to consider using a password manager as well, such as KeePassXC.

We do take security and privacy very seriously. Notifications about any actual security breaches would appear on this site, as well as in global notices sent out by members of staff (identified by a freenode/staff/ cloak).

Apologies for the confusion and thank you for using freenode!

by ilbelkyr at May 26, 2018 05:12 PM

May 24, 2018

freenode staffblog

Updated Privacy Policy

With GDPR coming into effect tomorrow, 25 May 2018, freenode has made some amendments to its privacy policy to provide clarification relating to GDPR compliance.

In the event that you do not consent to our continued processing of your personal data in order to provide you with access to the service, you may drop your nickserv registration using the drop command (please see '/msg nickserv help drop' for further instructions). The latest version of our policies can always be found here.

by christel at May 24, 2018 11:58 PM

May 14, 2018

freenode staffblog

Channel moderation and channel topics

On freenode, we have always tried to minimise the number of policies we apply across the network, allowing projects to run their project channels in ways that complement their wider procedures and codes of conduct both on and outside of IRC.

As such, a number of project channels opt to run their channels in a way that allows any user of the channel to modify the topic, and for most this is an approach that works most of the time, and ensures that updates can be announced and communicated effectively without all community members needing to be on the access list for the channel in question.

Naturally, the trade-off is that those outside of the community are also able to join and modify topics at will, and we are currently finding that a number of project channels are having their topics changed to a message encouraging the users of the channel to move to a different channel.

In light of the above, we would like to ask that you check the modes and topics of your channel(s), and if appropriate reinstate your previous topic and decide whether you wish to set +t, even temporarily, to reduce disruption within your community.

Please do not hesitate to message a member of freenode staff for assistance!

by christel at May 14, 2018 07:26 AM

May 13, 2018

erry's blog

How to fix your node dependencies’ es6 causing browser errors

If you're doing anything with modern JavaScript in this day and age, you're probably writing es6 and using babel to transpile it back to es5, which works with most browsers.

This works fine for the code you write, but what about your dependencies? Usually they provide transpiled code themselves, so it's not something to worry about. However, some of them may not (especially if their primary focus wasn't to be used in the browser), and, well… you may get errors like the below in older browsers (such as IE11):

   
    SCRIPT1014: Invalid character
    SCRIPT5009: 'webpackJsonp' is undefined

A screenshot of the IE11 error screen showing ‘Invalid character’ and ‘webpackJsonp’ is undefined

Your first instinct may be to leave the industry and go herd baby goats.

A baby pygmy goat looking out of a cave at Mudchute park & farm

However, there is an alternative. Or at least, I found a solution on github after much hair-pulling and googling:

First of all, you unfortunately need to figure out which module is causing the problem. Thankfully, your browser should show the code where the error occurs. In this case, the module was using es6 template strings, which IE11 does not support:

A screenshot of the IE11 debug console showing an error occurring because the module in question was using es6 template strings

If you're bundling your dependencies, it's not immediately obvious where the error is. In this case, I got lucky, and easily found the culprit with a bit of scrolling up; it turned out to be csv-parse (and yep, my evil experiments have me parsing CSV files in the browser…)

A screenshot of the debug console showing part of csv-parse’s code where you can clearly see its name in a comment
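
For reference, this is the kind of syntax that trips IE11 up (a contrived sketch, not csv-parse's actual code):

// IE11 chokes on the backtick syntax itself, so the whole bundle fails to parse.
function greet(name) {
  return `Hello, ${name}!`;      // es6 template literal: parse error in IE11
}

// Roughly what babel transpiles it to, which IE11 parses fine:
function greetEs5(name) {
  return 'Hello, ' + name + '!'; // plain es5 string concatenation
}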

Once you find out what the problem is, what you have to do is essentially tell babel to transpile that module as well, since you would normally be excluding all of node_modules.

First of all, if you have exclude: /node_modules/ in your webpack.config.js, you have to get rid of that.

Instead, use include: ['src'] (or whatever your source directory is).

Now you have to add the problematic module as well. In this case, this is what my config looks like now (replace csv-parse with the name of the module that is the problem):

  {
      test: /\.js$/,
      include: ['src', require.resolve('csv-parse') ],
      use: {
        loader: 'babel-loader',
        options: {
        }
      }
    },

Also, make sure that you are using babel-polyfill, so that the es6 runtime features your code and dependencies rely on are polyfilled in es5 browsers.

For example, I changed my entry to this:

        entry: [ 'babel-polyfill', './src/client/index.ts' ],

That’s it. Once you’ve done those things, after you re-compile, all the errors should go away and you should have plain old es5 code compiled from your es6 dependencies!

Here is the full diff of what I changed for comparison:

    diff --git a/.babelrc b/.babelrc
    index 76a26b7..244d28b 100644
    --- a/.babelrc
    +++ b/.babelrc
    @@ -1,4 +1,3 @@
     {
    -    "presets": [ "es2015" ],
    -    "plugins": ["transform-runtime"]
    +    "presets": [ "es2015" ]
     }
    diff --git a/webpack.config.js b/webpack.config.js
    index 3796b22..3112198 100644
    --- a/webpack.config.js
    +++ b/webpack.config.js
    @@ -6,7 +6,7 @@ const env = process.env.NODE_ENV
     const UglifyJSPlugin = require('uglifyjs-webpack-plugin')

     module.exports = {
    -  entry: [ './src/client/index.ts' ],
    +  entry: [ 'babel-polyfill', './src/client/index.ts' ],
       output: {
         filename: 'dist/public/bundle.js'
       },
    @@ -21,11 +21,10 @@ module.exports = {
           },
           {
             test: /\.js$/,
    -        exclude: /(node_modules|bower_components)/,
    +        include: ['src', require.resolve('csv-parse') ],
             use: {
               loader: 'babel-loader',
               options: {
     -            presets: ['@babel/preset-env']
               }
             }
           },

Huge thanks to KagamiChan and johnwebbcole on github, who found this issue and published a solution. I am merely reporting it in case it helps someone else, not trying to take credit :)

by Errietta Kostala at May 13, 2018 10:30 PM

February 26, 2018

freenode staffblog

freenode #live 2018 - Call for Proposals now open!

black belt

Jorge Oliviera from JOG, 6x South Brazil National Champion.

You do not need to have a black belt in FOSS to come talk at this year's freenode #live conference

freenode #live returns to We The Curious in Bristol, UK on Saturday 3 and Sunday 4 November 2018. The CFP is now live, and you can submit a talk over at the freenode.live website.

The inaugural freenode #live conference last year saw a star-studded speaker line-up including Deb Nicholson, Matthew Garrett, Karen Sandler, John Sullivan, Jelle van der Waa, Chris Lamb, Neil McGovern, Matthew Miller and many, many more.

Matt Parker from Standup Maths and Festival of the Spoken Nerd provided excellent entertainment on the Saturday evening, and the feedback from attendees, speakers and volunteers alike was overwhelmingly positive.

freenode #live 2017 was possible thanks to the generous support of sponsors such as Bytemark, Falanx, openSUSE, Private Internet Access, Ubuntu and Yubico. Private Internet Access has already agreed to sponsor the event for another year, and we are currently looking for additional sponsors. Please do not hesitate to get in touch if your company might be interested in supporting freenode #live 2018.

We are looking forward to hosting this year's freenode #live conference, and hope that you will join us there.

by christel at February 26, 2018 08:50 PM

February 05, 2018

freenode staffblog

Celebrating FOSS, and twenty years of Open Source

For the last few days I (and several of the freenode volunteers) have had the absolute pleasure of spending time with a wide range of freenode users over at FOSDEM in Brussels. FOSDEM has always provided us with an excellent opportunity to catch up, not only with one another but also with sponsors, group contacts and others.

I would like to extend heartfelt thanks to the incredible organisers and volunteers, speakers and attendees who make FOSDEM (and other such events) possible, and I would also like to thank those of you who took the time to speak with us, provide feedback, thoughts and words of appreciation. It is nice to be reminded that you appreciate the freenode project, and that you feel it adds some value.

We often find that a large proportion of our time is spent dealing with spam or other problematic behaviour, and it is all too easy to forget that the incredibly small minority of users that create issues are just that, a minority, and that the vast proportion of our userbase consists of amazing human beings who collaborate on exciting, important and curious projects. And perhaps we also sometimes forget to show our appreciation of the incredible work you all undertake within the FOSS and peer-directed project spaces.

On Saturday, I joined Laura Czajkowski, Leslie Hawthorn, Deb Nicholson, VM Brasseur and many others in song as we came together to sing Happy Birthday to Open Source to mark that it had been 20 years since the term was first coined. It felt fitting that this should take place not only during FOSDEM, but also during Free and Open Source Software Month. So once more, Happy Birthday Open Source!

To celebrate Free and Open Source Software Month, Private Internet Access is running a promotion this month, with savings of up to 62% if you take out an annual subscription here.

We'd love to hear about, and help highlight any other similar promotions run by other companies that are doing something similar to celebrate FOSS month! Please do let us know ([email protected]) or via IRC if you are doing something cool, and would like us to share it with our community!

by christel at February 05, 2018 11:19 PM

December 23, 2017

erry's blog

5 Helpful Linux Shell Tricks You May Not Know About

These are just 5 helpful Linux tricks I’ve picked up in my career and thought would be nice to share, in case there are others that don’t know about them!

  • Did you know that you can use cd without any arguments to go back to your home directory?
  • You can use cd - to go back to the previous directory you were in! If you do it again after the first time, you can switch back and forth.
  • On a similar note, there is actually a directory stack. You can use pushd instead of cd to go to another directory and add it onto your stack. You can see your stack with dirs, and use popd to take the topmost directory off the stack and go back to the directory before it (see the example after this list).
  • You can use history to see your command history. It should give you a list of numbers and corresponding commands. Then you can use !(the number) to repeat a command. For example, in the following scenario:
    $ history
    101 ls
    102 ssh user@host
    103 cd ~/Downloads
    

    In this scenario, you’d use !103 to repeat the command cd ~/Downloads.

  • You can use CTRL-R to search backwards in your command history. I’ve found this incredibly helpful! You could, for example, press CTRL-R followed by the word git to find the previous git push command in your history instead of typing the whole command in again. Save seconds from your day!
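
Here's the directory-stack flow from the third bullet, step by step (output from bash; other shells may format the stack slightly differently):

$ cd ~
$ pushd /tmp     # change to /tmp, pushing the old directory onto the stack
/tmp ~
$ dirs           # show the stack, current directory first
/tmp ~
$ popd           # pop the stack and return to the previous directory
~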

I’ve also made a video if you want to see those tricks in action! Enjoy!

by Errietta Kostala at December 23, 2017 05:48 PM

December 22, 2017

erry's blog

Getting started with express and typescript

I recently started an ExpressJS project, and I wanted to use Typescript, as I thought my project would benefit from the typed language and stricter structure.

I had a bit of trouble setting up when following other tutorials, so of course the right thing to do was to write my own.

This is a continuation of my previous tutorial, so I'll assume you at least have node installed – if not, follow my guide to getting started with nodejs first.

It also assumes you have some basic knowledge of express and typescript already, so it’s not about these components but rather about putting them together.

You want to start a new project, and install express as usual:

npm init
npm install --save express hbs
npm install --save-dev @types/express @types/node

The latter command will install typescript type definitions for express and node.

Now you can write your typescript classes and use express. This is a basic hello world page, src/app.ts:

import * as express from "express";
import * as path from "path";

class App {
    public express;

    constructor() {
      this.express = express();
      this.mountHomeRoute();
      this.prepareStatic();
      this.setViewEngine();
    }

    // This serves everything in `static` as static files
    private prepareStatic(): void {
     this.express.use(express.static(path.join(__dirname, "/../static/")));
    }

    // Sets up handlebars as a view engine
    private setViewEngine(): void {
      this.express.set("view engine", "hbs");
      this.express.set("views", path.join(__dirname, "/../src/views"));
    }

    // Prepare the / route to show a hello world page
    private mountHomeRoute(): void {
      const router = express.Router();
      router.get("/", (req, res) => {
          res.json({
              message: "Hello World!"
          });
      });
      this.express.use("/", router);
    }
}

export default new App().express;

You don’t need to do everything that I’m doing here, you could only keep the call to mountHomeRoute() and you’d still get your hello world app.

You can also see that you can still use express features like the router and views the same way as you would with plain javascript!

Once you’ve written your class to set up the express app, all you need is server.ts to start the server.

import app from "./app";

const port = process.env.PORT || 3000;

app.listen(port, (err) => {
  if (err) {
      return console.log(err);
  }

  return console.log(`server is listening on ${port}`);
});

Now that you have your typescript written, you can compile it into plain javascript.

First of all, you need typescript if you haven’t got it yet:

npm install -g typescript

This should give you the tsc command. Now, to compile all of your files under src, you can do the following:

tsc --outDir dist src/**/*

This tells the typescript compiler to compile all the files inside src/ to the dist directory. If you look at dist, it should have your generated files:

$ ls dist
app.js  server.js

Now to start your app you need to run the server.js file, which is the compiled javascript (not server.ts):

node dist/server.js

And if you navigate to http://localhost:3000/, you should see your app!

If you run your app using nodemon, it will automatically be restarted every time you re-compile, so all you need to do is re-run tsc.
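
To avoid re-running tsc by hand every time, you can leave the compiler in watch mode in one terminal and run nodemon in another (a common workflow; adjust dist/server.js if your outDir differs):

# terminal 1: recompile whenever the typescript sources change
tsc --watch

# terminal 2: restart the server whenever the compiled output changes
nodemon dist/server.js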

Configuring the compiler

If you don’t want to have to run tsc --outDir dist src/**/* you can configure the compiler so you don’t have to give the options every time.

All you have to do is create tsconfig.json with the options you want:

{
    "compilerOptions": {
        "target": "es6",
        "module": "commonjs",
        "outDir": "dist",
        "sourceMap": true,
        "baseUrl": ".",
        "paths": {
            "*": [
                "node_modules/*",
                "src/types/*"
            ]
        }
    },
    "include": [
        "src/**/*"
    ],
    "exclude": [
        "node_modules"
    ]
}

The outDir option should be familiar from earlier; by setting this to dist, tsc will automatically put all your generated files in the dist directory.

include tells the compiler where to look for typescript files, in this case src/**/*.

"target": "es6" is also important if you want es6 support.

If you want to see the full documentation for tsconfig.json, you can find it in the typescript handbook.

Once you have tsconfig.json ready, you can just use tsc to compile your code without having to manually specify options.

Next steps

From here on you should be ready to go! For more information, check out the following:

by Errietta Kostala at December 22, 2017 10:12 PM

December 08, 2017

erry's blog

Getting started with nodejs, nvm, npm

I’ve recently been playing with NodeJS, and wanted to share my findings. This is just a quick ‘hello world’ tutorial on getting started. I (as of writing) code perl for a living, so you should definitely take my word on nodejs ;). Ok, enough joking around, let’s get to it!

nvm

First of all, I can thoroughly recommend using the node version manager, nvm – even if you are only working on a single, small project. nvm manages different versions of nodejs installations on your system. While you don’t have to use it and can use pre-compiled binaries (or packages) or compile node yourself, my personal advice is to use it. The reason for this is that nodejs moves very quickly, and you may sooner or later run into having to run two applications that support different major versions of nodejs. While, in an ideal world, everyone would upgrade their dependencies, reality is often different. By using nvm, you can easily install and manage different versions of node in your system, potentially saving yourself headaches from these problems.

You can install nvm with a single command, curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.6/install.sh | bash. If you're allergic to magic curl | sh scripts, you can also perform a manual installation.

nodejs

With nvm

If you've got nvm, nodejs is really easy to install! Just run nvm install --lts to retrieve the latest long-term support version.
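
A typical first session looks something like this (nvm use activates a version in the current shell only):

$ nvm install --lts   # download and install the latest LTS release
$ nvm use --lts       # activate it in the current shell
$ node --version      # confirm which node is now on your PATH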

Without nvm

(Skip this if you used nvm)

If you can’t or don’t want to install nvm, you can install nodejs from nodejs.org.

npm

npm is the node package manager, and you get it automatically when you install nodejs. It's how dependencies are managed in the node.js world – if your software depends on a third-party library (such as express), you will usually find a package that you can install from the npm registry. This gives an easy and standard way of managing third-party dependencies.

Dependencies

Your app’s dependencies are kept in package.json. You can tell npm to install and automatically save something to this file. For example:

npm install --save express

Will install express and save it as a dependency in package.json. Now, every time someone downloads your software, they can automatically install all of your dependencies by just doing npm install.
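
After running that, your package.json will contain a dependencies entry along these lines (a minimal sketch: the name and version fields come from your npm init answers, and the express version will be whatever was current when you installed):

{
  "name": "hello",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.2"
  }
}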

Hello world

Now that you finally have everything you need, you can start writing code!

For example, here’s a very simple hello-world.js:

console.log("Hello world!"); // Very simple!

You can run this with node hello-world.js

Using modules

Remember npm from earlier? You may be wondering: "Where do the modules I install go? How do I use them?"

npm installs modules in the ./node_modules subdirectory of your project root.

To use a module installed by npm you can just require it – for example const express = require("express") will load express. You don’t need to tell node where to look – it already knows to look in ./node_modules.

Express

Now, this is a simple express app, let’s call it hello-express.js

// Load express
const express = require('express');
const app = express();

// Start listening on port 3000
app.listen(3000, () => console.log('App started'));

// Set up your routes.
app.get('/', (req, res) => res.send('Hello World!'));

app.get('/page', (req, res) => res.send('Second page'));

You can run your app like the previous one: node hello-express.js

You can now visit your server at http://localhost:3000/. Magic!

$ curl http://localhost:3000
Hello World!

$ curl http://localhost:3000/page
Second page

A bit more express

You can also define your routes in separate files – this makes things cleaner. For example, consider routes/account.js:

const express = require('express');
const router = express.Router();

router.get('/', function (req, res) {
    res.send({ users: [1, 2, 3] });
});

router.get('/:user_id', function(req, res) {
    const user_id = req.params.user_id;

    const users = {
        1: { name: 'John', surname: 'Doe'  },
        2: { name: 'Jane', surname: 'Doe'  },
        3: { name: 'Best', surname: 'User' },
    };
   
    if (!users[user_id]) {
       res.json({ error: `User id ${user_id} not found` });
    } else {
       res.json(users[user_id]);
    }
});

module.exports = router;

Now in your main file, add:

const accountRoute = require('./routes/account');
app.use('/account', accountRoute);

This will allow your app to serve /account and /account/(user id):

$ curl http://localhost:3000/account/  
{"users":[1,2,3]}
$ curl http://localhost:3000/account/2
{"name":"Jane","surname":"Doe"}

Next steps

Now that you know the basics, what’s next?

I also want to write a tutorial for a simple TypeScript + express app, since I’ve been working on that, so watch this space :)

Until next time!

by Errietta Kostala at December 08, 2017 06:55 PM

December 05, 2017

freenode staffblog

"Joe-Job" spam on other networks referring to freenode

It has come to our attention that someone is going around other IRC networks, spamming channels with racist messages which suggest that they are promoting a channel here on freenode, and that we are aware and supportive of such.

We are monitoring the situation, though there's little we can do when they're targeting other networks. We would like to clarify that we of course do not support any racist or hate-inciting behaviour, which is strictly against our terms of use, and that the channel(s) referenced in the spam messages are in no way connected to the spam - rather, they are innocent victims of a "Joe Job" designed to disrupt them as well as freenode.

Since there's little we can do to stop it on other networks, reporting each individual sighting, particularly in #freenode, is of limited value and likely to cause more noise than signal. However, feel free to message a staffer (use the /stats p command to see who's available) if you have concerns, particularly if you are part of the oper/staff team on another affected network and want to talk to us.

by bigpresh at December 05, 2017 09:00 AM

November 15, 2017

freenode staffblog

On Shadowbans

It's recently come to our attention that an unintended effect of combining channel modes allowed channel operators to set undetectable bans and quiets on users.

freenode considers this to be, in short, antithetical to our values and approach to moderation. While we recognise the challenges of moderating large channels, we urge channel operators to be as transparent as possible, and believe that users should be aware of moderation action taken against them.

In light of this, we are deploying a change to the $j extban. As of tomorrow UTC, $j will ignore ban exceptions set on the target channel, so it matches the same set of users that an uninvolved user checking that channel's ban list would see.
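For reference, a $j extban mirrors another channel's ban list; a hypothetical example (both channel names are made up):

/mode #mychannel +b $j:#otherchannel

Anyone matching a ban in #otherchannel is then also banned from #mychannel, and with this change, ban exceptions set in #otherchannel no longer exempt a user from that match.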

This was always the intention in enabling $j. I apologise on behalf of the staff team for any confusion this may have caused.

by edk at November 15, 2017 11:59 PM

November 03, 2017

erry's blog

Perl Dependency Management

I did a talk on Perl Dependency management and using Carton at the London.pm technical meeting in September.

The talk was recorded, and you can see my slides here.

by Errietta Kostala at November 03, 2017 01:04 PM

August 25, 2017

freenode staffblog

freenode #live - even more confirmed speakers

Yesterday, we announced more confirmed speakers for the freenode #live conference taking place at the At-Bristol Science Centre in Bristol, UK on 28-29th October this year.

Today we are happy to announce more speakers:

  • Chris Lamb - Currently the Debian Project Leader, Chris is a freelance computer programmer, author of dozens of free projects, and contributor to hundreds of others. Chris has spoken at numerous conferences, including LinuxCon China, HKOSCon, linux.conf.au, DjangoCon Europe, OSCAL, multiple DebConfs, Software Freedom Kosovo, foss-north & FOSS'ASIA.
  • Philipp Krenn - part of the infrastructure team and a Developer Advocate at Elastic, spreading the love and knowledge of full-text search, analytics, and real-time data. He is a frequent speaker at conferences and meetups about all things search & analytics, databases, cloud computing, and devops.
  • Oliver Gorwits - Oliver has a background in computer networks and is a senior IT manager at a major weather forecasting centre in the UK. For over 20 years he's worked with software as a hobby and contributed to open source, mainly in Perl, and now leads the Netdisco project.

Still more to come

With still more speakers to be announced, keep your eyes out for further announcements coming soon - and get your ticket now to secure your place!

Get your tickets now!

Want to watch these talented speakers? Get your tickets now to ensure you have the chance to experience these and the other speakers and workshops to be announced soon!

Exhibit your project or sponsor the event

If you represent a FOSS project and would like to exhibit, please contact us - FOSS projects exhibit for free, and it's a great way to meet your current users and attract others!

Corporate sponsors are very much welcomed also, with a variety of sponsorship packages available providing different exposure levels to a FOSS-centered, technical audience - get a warm fuzzy feeling by supporting the community which most likely contributed in no small way to the success of your business!

For any questions, please feel free to email us - [email protected] - or join us in #live on freenode!

We look forward to seeing you there.

by bigpresh at August 25, 2017 11:15 PM

August 23, 2017

freenode staffblog

freenode #live - more confirmed speakers

The freenode #live team are excited to announce more confirmed speakers for the freenode #live conference taking place at the At-Bristol Science Centre in Bristol, UK on 28-29th October this year, with plenty more still to be announced.

Of the variety of speakers, talks and workshops we've had submitted, we're pleased to announce the following confirmed lineup:

Keynote speakers

  • Deb Nicholson (Community Outreach Director for the Open Invention Network, winner of several awards including the O'Reilly Open Source Award)
  • Karen Sandler (Executive Director of the Software Freedom Conservancy, former executive director of the GNOME Foundation)
  • Matthew Garrett (technologist, programmer, and free software activist - a major contributor to various projects including Linux, GNOME, Debian, Ubuntu and Red Hat. He is a recipient of the Free Software Award from the Free Software Foundation for his work on Secure Boot, UEFI, and the Linux kernel.)

Confirmed talks

  • James Wheare (founder of IRCCloud)
  • Christopher Baines (Debian packager, OSM contributor)
  • Kaspar Emanuel (freelance electronic design engineer and software developer working on projects ranging from musical instruments to robots to Braille displays)
  • Nathan Handler (freenode staff member, Ubuntu and Debian GNU/Linux Developer, Site Reliability Engineer at Yelp)
  • Errietta Kostala (Perl developer at FairFX London, keen open source contributor including Mozilla)
  • Maxigas (postdoctoral researcher at the Universitat Oberta de Catalunya and a fellow at Central European University)
  • Michael Walker (working on a Ph.D, open source contributor, started Arch Hurd distribution, author of Déjà Fu Haskell concurrent testing library)

More to come...

Another 12+ talks yet to be confirmed will be announced soon, plus workshops on various topics.

Get your tickets now!

Want to watch these talks and/or take part in these workshops? Get your tickets now to ensure you have the chance to experience these and the other speakers and workshops to be announced soon!

Exhibit your project or sponsor the event

If you represent a FOSS project and would like to exhibit, please contact us - FOSS projects exhibit for free, and it's a great way to meet your current users and attract others!

Corporate sponsors are very much welcomed also, with a variety of sponsorship packages available providing different exposure levels to a FOSS-centered, technical audience - get a warm fuzzy feeling by supporting the community which most likely contributed in no small way to the success of your business!

For any questions, please feel free to email us - [email protected] - or join us in #live on freenode!

We look forward to seeing you there.

by bigpresh at August 23, 2017 10:00 PM

August 17, 2017

freenode staffblog

Spambot attack

Earlier this morning, the freenode network was hit by a fairly extensive spambot attack; the spambots were distributing links to images that users have reported as containing child pornography. Naturally, we are escalating the attack to law enforcement, but we would strongly encourage users to be vigilant and careful not to open links from users you do not know.

While the attacks are ongoing we have chosen to update the default umodes for users to include +R. Please note that this means you will not receive messages from unregistered users, and you will need to /mode yournick -R in order to allow those to come through. If you choose to set yourself -R, please be cautious of clicking on any links from unregistered users that you do not know.

At the height of the attack, one of the klines set resulted in a utility bot attempting to ban all users connected to the network. I can only apologise for this; we are looking into what happened.

Again, apologies for the disruption and please be cautious.

by christel at August 17, 2017 10:05 AM

August 10, 2017

erry's blog

Mojo VS Catalyst Lightning talk

I did a lightning talk at The Perl Conference in Amsterdam, Mojo VS Catalyst. Feel free to download the slides.

by Errietta Kostala at August 10, 2017 06:31 PM

July 24, 2017

freenode staffblog

freenode #live - Opening Keynote

We are delighted to announce the first of our keynotes for the freenode #live conference taking place at the At-Bristol Science Centre in Bristol, UK on 28-29th October this year. We have more keynotes to come, and these will be announced over the next couple of weeks. We also hope that you will join our brilliant line-up of speakers, and that you are considering submitting a talk.

Our opening keynote speaker is a thoroughly inspiring woman with extensive experience of the various aspects of the free software communities, and we are really excited to welcome none other than the glorious Deb Nicholson to Bristol in October.

Deb Nicholson Bio Picture

Deb Nicholson is a free software policy nerd and passionate community advocate. She is the Community Outreach Director for the Open Invention Network, the largest patent non-aggression community in history which serves Linux, GNU, Android and other key FOSS projects. She’s won the O’Reilly Open Source Award, one of the most recognized awards in the FOSS world, for her work on GNU MediaGoblin and OpenHatch. She is a founding organizer of the Seattle GNU/Linux Conference, an annual event dedicated to surfacing new voices and welcoming new people to the free software community. She also serves on the Software Freedom Conservancy's Evaluation Committee, which acts as a curator of new member projects. She lives with her husband and her lucky black cat in Cambridge, Massachusetts.

Get your tickets now to lock in to the early bird price and ensure that you have the chance to listen to Deb, and the other keynotes and speakers at freenode #live this October.

P.S. freenode will be at DEFCON25 this weekend (27-30th July 2017), do come find us in the vendor area and say hi, grab some stickers or get your hands on our limited edition freenode t-shirts!

by christel at July 24, 2017 10:52 PM

freenode at DEFCON25

DEFCON25 takes place at Caesar's Palace in Las Vegas on the 27th-30th of this month. freenode will be there—will you?

This year, freenode will have a booth in the vendor village—come have a chat with us, grab some stickers or get your hands on a limited edition freenode t-shirt!

freenode #live - October 28-29th 2017

Only a few more days to lock in to the special early bird ticket price! Get your tickets now, and don't forget to make a CFP submission if you fancy giving a talk.

Stay tuned, we're excited to be announcing the keynote speakers in the next few days!

by christel at July 24, 2017 03:04 PM

July 18, 2017

freenode staffblog

Exhibiting at freenode #live

You might have seen our latest blog post, where we announced the registration and call for participation for the freenode #live conference that takes place at the At-Bristol Science Centre in Bristol, UK on October 28-29th this year.

The freenode network plays host to a variety of free and open source software and other peer-directed projects. We would love to see your project come exhibit at freenode #live and we invite you to e-mail us at [email protected] if you are interested in coming along to showcase your project and meet like-minded FOSS enthusiasts.

We aim to ensure that the freenode #live conference remains affordable and accessible to the entire community, and to that end there is no exhibition charge for nonprofits and unincorporated FOSS projects.

While we are taking a grassroots approach with our conference, this does not mean that there is no space for corporate exhibitors; we recognise that the conference may provide opportunities for you too, and we welcome you to drop us a line to find out about our corporate exhibition packages or to tailor one specific to your company. Prices remain low for corporate entities as well, and we would love to see both types of exhibitor in Bristol on October 28 and 29th.

You can register and submit a response to our CFP here, and we hope to start announcing keynote speakers and exhibitors soon. Please do keep an eye on the website, though: we will also be opening up for nominations shortly, as we will be presenting two awards as part of freenode #live, including the Rob Levin Memorial Community Award! Perhaps you know someone who has made great contributions to your community, or even a project that has redefined the community spirit in a positive way? Have a think about who YOU will be nominating!

by christel at July 18, 2017 09:29 AM

July 13, 2017

freenode staffblog

freenode #live - Registration and CFP now open

The inaugural freenode #live conference takes place at At-Bristol in Bristol, UK from Saturday 28 to Sunday 29 October 2017. Both days will include talks and workshops, and we look forward to this community-focused gathering.

Tickets are now on sale, with a special Early Bird price until 31 July. We are also taking submissions for the call for participation; there are three tracks to choose from (FOSS & Community, Privacy & Security and Making & Remixing) and we look forward to receiving your submission.

We will be making every effort to ensure that freenode #live is as inclusive an event as is possible and we encourage the minorities within our community to attend and to submit proposals.

What are we looking for?

We are looking for proposals for talks, workshops and other events, and as a community-focused organisation, freenode is committed to ensuring that this conference is open to YOUR ideas! The more varied the programme, the better.

Perhaps you have something you would like to share with the community? We welcome proposals on all subjects and at any level of technical expertise, please do not feel constrained by the track subjects. We are keen to hear about your experiments, your discoveries, solutions, achievements and even your failures. We want to hear about “how to get started with X,” “how to use Y,” and “what we learned from working on Z”.

We want you to come teach us something new, we want you to make us think and we want you to make us laugh. Go ahead and submit a proposal for a talk, we’d love to hear your ideas.

Perhaps you could lead a tutorial on something of relevance to the freenode communities or the FOSS ecosystem? Why not throw your ideas in the hat and submit a proposal to hold a workshop?

Have a completely different idea? Is there something you would love to see happen at freenode #live? Let us know! We’re open to all ideas!

We want to hear from you: if you are planning to come to the event then you are very much part of the community, and as such this is very much your conference. That goes regardless of whether you are a seasoned speaker or whether this is the first time you are considering getting up in front of an audience. The freenode community is friendly and welcoming.

The CFP closes on 15 August 2017.

Together with our main sponsor and co-host of this year’s event, Private Internet Access, we very much look forward to welcoming you all to Bristol in October!

If you have any questions, please do not hesitate to drop us a line to [email protected] or hop into #live and chat with us.

by christel at July 13, 2017 07:35 PM

June 29, 2017

freenode staffblog

Policy Updates

When the website was redesigned back in 2016, we made the decision to publish only a very limited set of policies for a period, to allow us a chance to review our policies and guidelines, most of which were written over a decade ago. We have now completed our review and updated the policies accordingly.

by christel at June 29, 2017 11:26 AM

June 07, 2017

erry's blog

Perl Dependency management with Carton

The Perl dependency management tool Carton not only makes it easy to keep dependencies separate for different applications or parts of the system, but also makes it easy to deploy an application and handle its dependencies.
Using Carton is extremely straightforward for anybody who’s used cpanm to manage dependencies before.

Installing and using your application

The first step is to write a cpanfile:

requires 'LWP', '6.26';
requires 'DBIx::Class', '0.082840';
requires 'Config::ZOMG', '1.000000';

Then in order to install the dependencies on your development machine you can run `carton install`. This will install the dependencies in a folder called ‘local’ inside your source directory – that means your globally installed perl modules are untouched!
It’s worth noting that you won’t be able to run your application as you normally would (perl my_app.pl) because perl won’t know where to load the dependencies from. Instead, you will now have to prefix everything with carton exec, e.g. carton exec perl my_app.pl!
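So, day to day, the loop looks like this (my_app.pl standing in for whatever your entry script is):

carton install              # installs the cpanfile dependencies into ./local
carton exec perl my_app.pl  # runs your app with those dependencies visible to perl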

Version Control

Once you’ve run carton install, you’ll notice a file, `cpanfile.snapshot`, has been created. It includes not only your dependencies, but also the dependencies of those dependencies, at the correct versions. That means you don’t have to worry about writing down every single version or risk having the version of a dependency change in the future – everything is permanently recorded for you.
You will want to add this file to version control:

git add cpanfile cpanfile.snapshot

If your application ever needs a new dependency, all you have to do is tweak the cpanfile, re-run carton install and commit the changes to cpanfile.snapshot.
When another developer checks out the repository, they have to run carton install to get the dependencies, and use carton exec like you did.

Deployment

There are two ways of installing your dependencies: one is to get the dependencies from the Internet, and the other is to keep them locally.
If you want to do the former, your deployment process just has to pull the latest version of the repository and do:

carton install --deployment

(And of course run the application with carton exec).
If you don’t want to get the dependencies from the Internet, your deployment process will have to run carton bundle before deployment.

This will bundle your dependencies into the vendor/ folder. This folder will then have to be copied on to your production machine. Once there, you can run carton install --deployment --cached.
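Putting the offline variant together, a minimal sketch (the host and path are made up, and my_app.pl again stands in for your entry script):

# on your development machine
carton bundle                             # copy dependency tarballs into vendor/
rsync -a ./ deploy@production:/srv/my_app # ship the app, vendor/ included
# on the production machine
cd /srv/my_app
carton install --deployment --cached      # install from vendor/, no network needed
carton exec perl my_app.pl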

In conclusion, carton is an extremely useful and easy-to-use tool. The only part that was difficult for me was figuring out working versions of the dependencies, but that is something one has to do when writing a list of dependencies either way. I certainly recommend it to anyone wishing to use modern Perl deployment practices!

by Errietta Kostala at June 07, 2017 06:49 PM

April 20, 2017

mquin's blog

#500words day 7

Made it to the last day of the challenge. It's been an enjoyable week. I've not been finding it easy to find things to write about, but I am proud of having managed it. I think I will try to continue for as long as I can.

I ran again this morning, having taken a rest day yesterday. Pleasantly, the weather had improved and it was much warmer than the near-zero temperatures I experienced the last time I was out. I feel that I'm getting back into a groove with it, and my times have subtly but steadily improved over the course of the week.

Related to running, I made a useful discovery that Garmin Connect, the web application that supports my GPS watch, has, at some point in the last couple of years, gained the ability to synchronise with Runkeeper. I use both of these tools for a couple of reasons - partly because when I started running I was tracking my runs using a smartphone and used Runkeeper to do that, and partly to keep my data in more than one place to mitigate the risk that one of the applications might go away or lose data.

Up until now I'd been making use of a third-party application to pull the log files from my watch and upload them to Runkeeper in a manual process separate from Garmin's automatic synchronisation to Garmin Connect. I'd been struggling to get the site to work properly, having made a recent switch from Firefox to Safari as my day to day browser, and this led me to discover that this process was no longer necessary. Having authorised the connection between the two web applications, the synchronisation process happens almost immediately after I connect the watch to my computer.

The outside world continues to be alarming, a situation that I don't imagine will improve in the next six weeks before the election, or really any time soon. Despite all the rhetoric about 'control' over the last couple of years, it feels to me that there is an utter absence of it, and we have politicians doing what they have the ability to do with little regard for whether what they are doing is going to be good for the country as a whole in the long, or even short, term.

I'm doing my best to stay positive, but I'm increasingly feeling the urge to keep my head down and my friends close in whatever ways I can.

Had a bit of a panic with these writings yesterday, as I somehow managed to end up with an empty file in place of what I'd written on day 5. Fortunately, after a lot of head scratching and fiddling around with git and Time Machine in the hope of recovering it, I noticed that it was still present, unsaved, in a tab in my editor. Phew. I gather that there are a couple of extensions for Atom that autosave work-in-progress, so I'll be exploring those soon.

(500 words) 2017-04-20 0815

April 20, 2017 07:15 AM