The Cobenian Blog

Musings on Software and Product Development

10th Anniversary!

On October 2, 2022, Cobenian celebrates its 10th anniversary!

10 years ago I set out on my own without any clients or contracts, with only a laptop and a dream to help clients solve difficult problems with elegant software solutions. A lot has changed since then, but ten years later we continue to strive to help our clients and their customers with their most challenging issues.

I would like to thank our employees, contractors, partners and our clients for making Cobenian a special place to work for the past 10 years!

Building an Elixir library - Part 1

This post details a library we recently wrote and some of the design decisions that went into it. If you are brand new to Elixir, this is probably not the best place to start learning the language.

What is it?

The raygun library helps your program capture bugs at runtime and send them to Raygun for centralized error reporting.

There are several competitors to Raygun, such as Bugsnag, Honeybadger, Sentry, Airbrake, and Rollbar, to name a few. Many of these services already have Elixir libraries; however, there wasn't a library for Raygun yet. This blog post isn't intended to sell you on Raygun vs. its competitors (although it seems like an excellent service); it is intended to help improve the way you write library code in Elixir.

How are errors captured?

The first question we asked ourselves when we wrote this library was how we envisioned using it. We looked at some of the existing libraries for inspiration and noticed two common patterns: errors were either captured via a Plug (and thus used with Phoenix) or they were captured as they were logged. We decided that our Raygun library should support both types of error capture.

Sending errors to Raygun

Before we could capture errors, we needed a way to send them to Raygun. Raygun supports many programming languages out of the box; however, Elixir is not one of them. Luckily, the Raygun REST API is very easy to use. It essentially has a single endpoint, "/entries". Errors can be posted as JSON. Requests are secured via an API key. Successful requests will return a 202 status code. All pretty standard for a web API, except maybe for the 202 status code, but that just signifies that the message was accepted for processing but hasn't been processed yet.

So at its heart, our library is really about the following two lines of code:

{:ok, resp} = HTTPoison.post(@api_endpoint <> "/entries", json, headers)
%HTTPoison.Response{status_code: 202} = resp
Notice that there is a dependency on the HTTP client library HTTPoison.

Let's break down those two lines of code. On the first line we post a JSON message to a URL with details about the error, and include our API key in the headers so Raygun knows that it is us posting the error.

If you aren't familiar with Elixir, the "@api_endpoint" is a module attribute, which you can think of as a constant in this case. It contains the base URL for Raygun's RESTful API. We then concatenate the string "/entries" onto the base URL using the <> string concatenation operator.
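If module attributes are new to you, here is a tiny standalone sketch of an attribute and the <> operator in action. The module name and base URL below are made-up placeholders, not Raygun's actual endpoint:

```elixir
defmodule UrlExample do
  # A module attribute used as a constant (placeholder URL for illustration).
  @api_endpoint "https://api.example.com"

  def entries_url do
    # <> concatenates binaries (strings).
    @api_endpoint <> "/entries"
  end
end

IO.puts UrlExample.entries_url()
```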

The 'headers' variable is a map of key/value data and 'json' is the JSON-encoded error payload. Our headers are defined immediately before the two lines of code:

headers = %{
  "Content-Type": "application/json; charset=utf-8",
  "Accept": "application/json",
  "User-Agent": "Elixir Client",
  "X-ApiKey": Application.get_env(:raygun, :api_key)
}
We don't want our API key checked into our version control system, and we certainly don't want it hard coded in our library, so the API key is read from a configuration file.

We will cover the JSON data in a future blog post. For now, it is only important to understand that the JSON data contains information about the error that occurred.

Once we post the error to Raygun, we expect it to return a tuple with :ok and the HTTP response. If the error is not successfully posted to Raygun a different tuple will be returned and a match error will be generated at this point. If Raygun is unavailable the library can't do anything other than drop the error completely. We drop the error by spawning a process and not linking or monitoring the process. If the process crashes nothing will happen. If you are unfamiliar with the difference between spawn, spawn_link and spawn_monitor we recommend reading the second part of Elixir in Action.

spawn fn ->
  {:ok, resp} = HTTPoison.post(@api_endpoint <> "/entries", json, headers)
  %HTTPoison.Response{status_code: 202} = resp
end
The error will also be dropped if Raygun returns any non-202 HTTP status code, because the pattern match on the second line will fail inside the spawned process.
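A standalone sketch of why an unlinked, unmonitored spawn drops failures quietly; the raised error here is just a stand-in for a failed post to Raygun:

```elixir
# A crash in a plain spawn'ed process (no link, no monitor) does not
# take down the caller; the failure is simply dropped.
caller = self()

spawn(fn -> raise "simulated failure while posting" end)

# Give the child process a moment to crash.
Process.sleep(100)

IO.puts "caller survived: #{Process.alive?(caller)}"
```

With spawn_link the crash would propagate to the caller, and with spawn_monitor the caller would receive a :DOWN message; plain spawn is what lets the error disappear.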

Programmatically report an error

The library includes the following function for reporting error messages:

def report_message(msg, opts \\ []) do
  msg
  |> Raygun.Format.message_payload(opts)
  |> Poison.encode!
  |> send_report
end
It takes a string message, converts it into a map with the error details, encodes it as JSON (using the Poison library) and calls the send_report function which contains all the code we've looked at so far. We can safely ignore the optional arguments that default to an empty keyword list for now.

We still haven't seen how to generate the JSON payload for an error, but now we can send an error to Raygun with a remarkably small amount of code. Let's look at how we might capture an error next.

Capturing errors as they are logged

Elixir's built-in Logger module provides a simple extension mechanism called 'backends' that enables developers to handle messages as they are logged.

def handle_event({:error, gl, {Logger, msg, _ts, _md}}, state)
                when node(gl) == node() do
  Raygun.report_message msg
  {:ok, state}
end

def handle_event(_data, state) do
  {:ok, state}
end
Here we see that if a message is logged with :error as the first element of the tuple, we pass the message on to our Raygun code. If anything else is logged, the backend simply does nothing.

There are two ways to turn on capturing errors via Logger. The first is via configuration:


config :logger,
  backends: [:console, Raygun.Logger]
The second way is to enable the Logger backend programmatically:
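Using Elixir's standard Logger API, that presumably looks something like the following (a sketch; it assumes the Raygun.Logger backend module shown in the configuration above):

```elixir
# Enable the Raygun backend at runtime, e.g. in your application's
# start/2 callback (assumes the Raygun.Logger backend module exists).
Logger.add_backend(Raygun.Logger)
```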


Capturing errors via Plug

Typically Plugs are used to sequentially process an incoming HTTP request. They can short circuit and not trigger the remaining plugs in the pipeline, however this is not the behavior we want for our case. If a plug before our plug fails, our plug is likely to be skipped. If a plug after our plug fails we won't ever capture the error.

What we really want is a way to wrap every invocation in the pipeline and if it fails, then pass the error to our Raygun code. This is possible with a macro:

defmacro __using__(opts) do
  quote location: :keep do
    @before_compile Raygun.Plug
  end
end
This simply expands the following macro prior to compiling the module that uses our module. Wrapping the code is slightly more advanced, but it isn't very complicated. We try to run the code being wrapped by calling super. If it throws an exception, we catch it. The first thing we do is capture the stacktrace associated with the exception. Both the exception and stacktrace are passed to our Raygun code, and then we re-raise the original exception and stacktrace.

defmacro __before_compile__(env) do
  quote location: :keep do
    defoverridable [call: 2]

    def call(conn, opts) do
      try do
        super(conn, opts)
      rescue
        exception ->
          stacktrace = System.stacktrace
          Raygun.report_plug(conn, stacktrace, exception,
              env: Atom.to_string(Mix.env),
              user: Raygun.Plug.get_user(conn, opts))
          reraise exception, stacktrace
      end
    end
  end
end
In order to enable Raygun via Plug, we have to add our macro to the router (in Phoenix for example).

defmodule YourApp.Router do
  use Phoenix.Router
  use Raygun.Plug

  # ...
end
So yes, it really is that easy to configure your Phoenix applications to send all uncaught errors to Raygun.

In part two we will cover how to turn an actual error into the data structure that Raygun expects.

Increasing Diversity in the Elixir Community

Are you interested in increasing the diversity in the young Elixir community? Are you willing to contribute to make it happen?

We are teaching the second Elixir Mastery class on November 4-6 in Washington, D.C. For our first class we had a corporate sponsor pay for two scholarships. We wanted to double the number of scholarships for our second class.

We have already given out four scholarships for the class and we have several people on the waiting list. Interest in the class has been high. We would like to make scholarships available to more people. We are now seeking additional corporate sponsors that will open doors for more women, minorities and people without the financial means to attend the class.

What does it cost?

Tuition for the class is $1000 per student. We are asking for $600 per student to cover the costs of the class. We will cover the remaining $400. If you are unable to contribute the full amount we can pool contributions together so no contribution is too small.

Some students may require travel and lodging as well as tuition for the class. A sponsor may choose to additionally cover the cost of travel and lodging. We currently have one student on the waiting list from Portland.

What does the student get?

  • 3 days of professional training
  • 2 books on Elixir
  • Breakfast, lunch and snacks
  • Access to all the class materials

What do you get?

  • 1 more member of the Elixir community
  • Your logo on the Elixir Mastery website as a diversity sponsor
  • 2 tweets of gratitude for sponsoring a seat in the class


If you are interested in contributing, please contact us to learn how you can help someone join the Elixir community.

Handing over a self contained Elixir program

One of the reasons for the rapid adoption of the Go programming language is that it can be cross compiled down to a set of architecture-specific static binaries. The deployment model for these programs is often described as 'scp' because all it takes to distribute the programs in a runnable state is to copy a single file. Of course, this is sometimes an oversimplification if there are config files, external dependencies (database anyone?), etc., but for many programs the deployment model really is that simple.

Recently we wanted similar behavior for some internal development tools (this blog focuses on CLI programs) that we're writing at Cobenian in Elixir. With some help from one or two people on the #elixir-lang freenode channel we were able to get it working, so we thought it might be helpful to document for other folks in the community.


The Details

Create a mix project

mix new cli_tool
The output should look something like:
* creating
* creating .gitignore
* creating mix.exs
* creating config
* creating config/config.exs
* creating lib
* creating lib/cli_tool.ex
* creating test
* creating test/test_helper.exs
* creating test/cli_tool_test.exs

Your mix project was created successfully.
You can use mix to compile it, test it, and more:

    cd cli_tool
    mix test

Run `mix help` for more commands.

Write your code with a main/1 function


defmodule CliTool do
  def main(args) do
    args |> Enum.each(&print_arg/1)
  end

  def print_arg(arg) do
    IO.puts "arg: #{arg}"
  end
end
Be sure to have 1 parameter in your main function.  

Add exrm dependency


defp deps do
  [{:exrm, "~> 0.17"},
   {:relx, github: "erlware/relx"}]
end

Get the dependencies

mix deps.get

Build the release

mix release
The release will be built under a directory called 'rel', within a sub-directory named after your project. The single artifact that we care about is the tar bundle in that sub-directory. In our case that is rel/cli_tool/cli_tool-0.0.1.tar.gz.

Distribute the release

There will be a gzip'd tar under rel/cli_tool/cli_tool-0.0.1.tar.gz. The directory name and file name will be different if you used a different project name and/or version. So let's pretend we want to scp the file to a remote machine.

scp rel/cli_tool/cli_tool-0.0.1.tar.gz
Now, log into that machine and prepare to run our code.

ssh
$> mkdir cli_tool
$> cp cli_tool-0.0.1.tar.gz cli_tool/
$> cd cli_tool
$> tar xvfz cli_tool-0.0.1.tar.gz
Note that the tar bundle does NOT include a single directory with everything in it. So you will probably want to create a new directory and untar the contents there like I did above.  

Run the release

If we were to manually run our code on the command line it would look like this:

bin/cli_tool escript lib/cli_tool-0.0.1/ebin/Elixir.CliTool.beam "foo" "bar"
The output should look like this:
arg: foo
arg: bar

Slightly Improved

That's not very user friendly, so let's improve it a tad. In your project root, create a shell script under priv/ with the following contents:



#!/bin/bash

DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

cd "$DIR"
../../../bin/cli_tool escript lib/cli_tool-0.0.1/ebin/Elixir.CliTool.beam "$@"
And don't forget to make your file executable.

chmod +x priv/
Now you can rebuild your release and re-distribute it following the steps outlined above. Then you can run the bash script and pass it arguments:

lib/cli_tool-0.0.1/priv/ ARGS GO HERE
At this point, you'll most likely want to put lib/cli_tool-0.0.1/priv in your path.

Cleaning Up

If you need to clean out a previous release you built you can use the following command prior to running 'mix release' again.

mix release.clean --implode

Conclusion and Feedback

So this is admittedly not as convenient as having a single, static binary like you get with Go. I'm also not aware of a way to cross compile Elixir releases right now.

However, this isn't too bad if your team uses Macs or Linux. You simply distribute a tarball to your team members, they untar it and add a directory to their path.

So while this isn't perfect, it certainly is acceptable for our team given that it means we get to write more tools in Elixir.

Finally, I suspect that the Elixir community will have plenty to say about this process and hopefully they can make some suggestions of ways to improve it.

Why is Elixir pattern matching special?

Perhaps by now you've heard that Elixir is the next big thing? Or maybe you think it is just the latest fad, but either way it has some truly interesting features that will be new to you if you've only ever done Object Oriented Programming (or only programmed in Java, Ruby, C#, Python, PHP, or JavaScript).

Years ago I had a conversation with Josh Suereth at a conference and we compared pattern matching in Scala and Erlang. It was a really fun conversation and in it I made the assertion that Erlang's pattern matching was more powerful than Scala's. Josh asked how that could be given that Scala provides an API that enables the developer to determine what the match behavior should be! I frankly was unable to explain it at the time which to me was a sign that while I was perhaps on to something, I didn't yet understand it well enough to articulate it.

Many years have passed and in the last several months I have taken a deep dive into Elixir because of my previous interest in Erlang. I also took some time to study Prolog since that conversation and now I finally feel like I can explain why pattern matching is so powerful and this blog post attempts to express my thoughts. This post is starting to get a little long, so I've left out a ton of details at every turn, but I tried to hit on the major points here.

By the power of Prolog...

The early implementations of Erlang were written in Prolog, so some uses of Prolog semantics shine through in Erlang and Elixir to this day. Pattern matching is one such case.

In order to begin to appreciate pattern matching you have to understand what it has to offer and what it is good for. Pattern matching in Erlang and Elixir is more than the de-structuring you'll find in other programming languages like Scala and Clojure.

Some uses of pattern matching

  • "assignment"
  • assertions
  • de-structuring
  • smarter case statements
  • removing conditionals

So let's look at each one in turn before we try to draw some conclusions based on the big picture.

The "how can I make this true?" operator...

a.k.a. "assignment"

We recently taught a two day Elixir class at Living Social in Washington D.C. (Don't worry, we're going to offer it again this fall if you missed it! Follow Elixir Mastery to learn more when we finalize the details over the next few weeks.) One of the first things we covered is that '=' appears to work just like assignment in other programming languages, but upon closer inspection it is not at all the same.

In Elixir the '=' operator does NOT assign the value on the right hand side to the variable on the left hand side. Instead it attempts to unify the left hand side and right hand side into a statement that is true. This means that

x = 3
should not be read as "set the value of x to 3"; instead you can think of it as "how can I make this statement true?" Well, one way this statement can be true is if x has the value 3. So Elixir sets the value to 3.

At first glance, this appears to be a useless parlor trick and of no real significance, but let's dig in a little deeper now.


assertions

One thing that many developers new to Elixir find odd is that the following code works.

3 = x
If you've only ever worked with the likes of Java, Ruby or JavaScript then this may seem odd to you. However, if you remember what the '=' operator means, you'll see that as long as the value of x is 3 then this statement will be true.

So what exactly happens when the value of x is not 3? The TL;DR version is that a MatchError is thrown. That means that I can use this pattern to assert certain expected states in my program. Concise, but is it really that much better than an explicit call to 'assert' in a program? I think that it would be hard to argue that case convincingly. So this is nice, but let's keep moving through the various uses of pattern matching.
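A runnable sketch of the match-as-assertion behavior just described:

```elixir
# '=' asserts: if the match cannot be made true, a MatchError is raised.
x = 3
3 = x    # succeeds; x already has the value 3

result =
  try do
    4 = x        # cannot be made true, raises MatchError
    :matched
  rescue
    MatchError -> :no_match
  end

IO.inspect(result)
```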


de-structuring

De-structuring allows a developer to rip apart data structures and set the value of many variables at once.

{first_name, last_name} = {"John", "Doe"}
Again, using our "how can this statement be made true?" logic, we can see that if the variable 'first_name' had the value "John" and the variable 'last_name' had the value "Doe" then this statement would be true. So that's exactly what Elixir does.

De-structuring is very useful for two reasons, in my opinion. The first is that it allows the developer to do several important things in one concise step, namely dig into the parts of the data structure that we care about. But secondly, and every bit as importantly, we as people have a very easy time understanding de-structuring on an intuitive level. Typically, doing several things (like setting the values of multiple variables) in one step in code would be confusing and tough to reason about when a bug arose. However, with de-structuring that is often not the case, because our brains are good at recognizing patterns and we can see the symmetry between the left hand side and right hand side.

Also really helpful is that we can do destructuring on function parameters.

def greet({first_name, last_name}) do
  IO.puts "Hello #{first_name} #{last_name}, how are you?"
end
In fact, we can nest pattern matches here and we can even do this convenient match as well:

def greet(person={first_name, last_name}) do
  IO.puts "Hello #{first_name} #{last_name}, how are you?"
end
Notice that in addition to the de-structuring of the tuple itself, we've also used pattern matching a second time to assign the value of the entire tuple itself to the variable 'person'.

smarter case statements

In some languages (Java in particular) you are limited in what you can match on in case statements. Elixir gives you full pattern matching for case statements:

case user do
  {first_name, nil} -> IO.puts "Last name is required."
  {nil, last_name} -> IO.puts "First name is required."
  u -> IO.puts "Hello #{elem(u, 1)}"
end

removing conditionals

Imagine that you have a function that is responsible for adding three integers in a tuple. Easy, right?

def triple_adder({a, b, c}) do
  a + b + c
end
But what if you also wanted to be able to accept a list of three numbers and add them as well? We might end up with something like this:

def triple_adder(list_or_struct) do
  if is_list(list_or_struct) do
    [a, b, c] = list_or_struct
    a + b + c
  else
    {a, b, c} = list_or_struct
    a + b + c
  end
end
Sure, this is a contrived example, but sometimes the number of nested if's becomes quite complex and we end up with messy code.

Elixir identifies functions by module, name and arity (just a fancy word for the number of parameters), but you can define a function in multiple pieces and the first one that matches is the one that will be executed. Therefore, we can simplify the code we just wrote and remove the conditional completely.

def triple_adder({a, b, c}) do
  a + b + c
end

def triple_adder([a, b, c]) do
  a + b + c
end
Now when we call triple_adder the decision on which block will execute will depend on the arguments. Again, this example is very simple, but in real code this pattern removes a remarkable number of conditionals from your code and leaves very short, concise and readable functions in their place.
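Wrapped in a module so it can be run as a standalone script (the module name here is ours, purely for illustration), the two clauses dispatch on the shape of the argument:

```elixir
defmodule TripleAdder do
  # Clause chosen when the argument is a three-element tuple.
  def triple_adder({a, b, c}) do
    a + b + c
  end

  # Clause chosen when the argument is a three-element list.
  def triple_adder([a, b, c]) do
    a + b + c
  end
end

IO.inspect TripleAdder.triple_adder({1, 2, 3})   # => 6
IO.inspect TripleAdder.triple_adder([4, 5, 6])   # => 15
```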

This function dispatching based on pattern matching is what makes pattern matching in Elixir feel magical in my opinion and what puts it ahead of simple de-structuring.

Some people will argue that this form of function dispatch is actually a code smell and that all dispatching should be encapsulated in types. Pattern matching can look like a glorified case statement at times which would mean that every time a new type was added old code would have to be re-written to account for the new type(s). And in object oriented code that is the case. However, in Elixir we don't define new classes as types, the set of types is fixed to what the language provides for us. Abusing structs is one case where this problem can arise in Elixir, but for the vast majority of cases you will simply describe the data structures you have and how you will handle them. In other words, you care about the structure of your data and not the type.

One final point is that this form of pattern matching changes the locality of the business logic in your code. With encapsulation the logic is spread out across your different type implementations (classes). However, with pattern matching your logic tends to centralize in the same place, where the matches occur. Depending on the complexity of your code and your personal stylistic preferences, you may legitimately want to have the logic in one place or spread across multiple places in your code base. So I don't chalk this up as a pro or a con, but rather a more nuanced 'it depends' type of situation. Not to mention that you can use structs and protocols in Elixir to get OOP like behavior when appropriate.

The end of this long blog post

In my opinion it is this final case and the fact that pattern matching is pervasive that sets pattern matching in Elixir apart from other languages. The code I write in Elixir naturally has much shorter functions and far fewer conditionals than in any other programming language I've used.

Pattern matching in Elixir is powerful and interesting. I sometimes use the analogy that pattern matching is like having an AI that just knows how to do certain things with your code that you don't want to take the time to describe, things you intuitively just know. Sort of like those students who know the right answers on their math test but can't explain exactly how or why they know the answer is correct. Pattern matching is one of my all-time favorite things to come out of Prolog; it uses logic programming to help make your programming simpler!

Elixir Mastery class May 28-29

We're very pleased to announce that we will be hosting an Elixir Mastery class in northern Virginia on May 28-29th.

No Elixir experience is required but the class is an intensive programming class. We'll cover the language and its tooling for 1 1/2 days and then we will have a brief introduction to Phoenix and OTP.

If you have any questions, email us or tweet to elixirmastery.

Space is limited so reserve your seat now. Register for Elixir Mastery


We're proud to introduce our new product: the simplest way to share your web design work with your clients. It came from our desire to have a tool that allowed us to share new website designs with clients as we iterated over various versions, incorporating feedback along the way.


It allows you to iterate over HTML designs with your client. It is easy for you and even easier for your client. It is like having version control as you get feedback from your clients.

The Problem

We started by sharing our work over email. It worked, but it didn't work well. Feedback was spread out over dozens of emails, clients confused versions, and we couldn't tell who had actually taken the time to look at our work. We realized that we needed something better. We went looking for something that would solve our problem: an open source project, a software-as-a-service product, anything really. We were surprised that we couldn't find what we were looking for, so we decided to build something for ourselves.

The first attempt

We stood up our own nginx server. We had to make sure that the work we did for clients was kept private, so we turned on authentication. This worked for a little while, but every time the client requested that someone else be included or removed we had to make changes to our configuration. We had to keep our work separated by client because we certainly didn't want to accidentally share work for one client with another client. We still had to put together a distribution list and send emails to let clients know when a new version was available. Clients couldn't keep track of their passwords so we were constantly resetting them. In a lot of ways, it was a step back from email because the administrative burden put on us was higher and clients had to ask us to reset their password before they could see the new version.

Ah ha moment

Finally, after an internal brainstorming session, we realized that we could fix this problem and that it would be useful for other companies as well. Web designers, web developers, even teams within large companies could benefit from a tool that allowed web designers to distribute their work privately to their clients.

Guiding Principles

We decided to set a few guiding principles during our architecture design phase. First, the tool must be easy for our clients to use. Not just easy, but beyond easy: no more passwords for them to remember, no unzipping files and having to figure out what to do with the contents, no more confusion about which version was the newest.

Beyond Easy

Clients receive an email when you notify them that a new version is available, and they click on a link to see your latest work. They have nothing else to do.


We needed it to be private, but as soon as we added passwords our clients couldn't ever seem to remember them.

How is a link secure? We generate a unique access key for every client for every notification they are sent. This access key + HTTPS is what allows us to know whether or not the client can be trusted and what content the client should see.
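As a purely illustrative sketch (not necessarily how the product actually generates its keys), a unique, URL-safe access key can be derived from cryptographically strong random bytes:

```elixir
# Generate 32 random bytes and encode them URL-safely (illustrative only;
# the variable name and key length are our assumptions, not the product's).
access_key =
  :crypto.strong_rand_bytes(32)
  |> Base.url_encode64(padding: false)

IO.puts access_key
```

A key like this is embedded in each emailed link, so the server can identify the client and scope what content they may see without a password.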

Content Generation

We decided very early on that designers and web developers want to use their own tools. We don't force you to use ours to generate your work; we just help you get it to your clients.

Now you have your content, so what's next? Well, we thought about lots of different ways that you could get it in front of your clients. In the end we decided to have users zip up their content and upload it. But when it is time to share your work with the client, what exactly will the client see? Just like programming languages have a "main" method, the web has "index.html" as its starting point. So the one requirement we place upon uploaded content is that it contains an index.html file.

zip, click, demoed

Now you probably see why we say, "zip, click, demoed." Web designers zip up their content, click a button to upload it and click another button to notify their clients. The client clicks a link and the work is demoed. Ok, so technically that's zip, click, click, click, demoed, but that just doesn't have quite the same ring to it, does it now?

One of the questions we are asked most often is why we don't offer a free plan. "Haven't you ever read the software-as-a-service rules?" "Don't you know your conversion rates will be much higher?" Of course we're aware that most software services offer a free plan these days, but for every rule there is an exception. In our case, the exception is security. The service could be abused by spammers and folks who want to distribute malware via malicious sites. We certainly don't want to enable these people or be associated with such activity. So we've decided that we will only support paid plans, and we've tried to set the price point such that it disincentivizes anyone who would try to abuse our system in such ways.

Technology Stack

All the geeks out there want to know what we used to build it. So here it is in all its glory.

We use Ember.js and Ember data to have a fast web application that consumes data via RESTful services. If you've ever heard of Twitter Bootstrap you can probably tell that we use it as well. For our backend, we use Phoenix, a web framework written in Elixir. We're huge fans of Elixir and the Erlang ecosystem that it is built on. In fact, we run our local Elixir meetup. We use Postgres for the database and we're deployed on AWS. Was it a fun tech stack to work with? You'd better believe it! Ember had a learning curve and it took the most effort to become proficient in, but you'd be hard pressed to find a better stack in 2015.


So there you have it: why we decided to build it, some of the design decisions we made along the way, and some high-level information about our tech stack. We hope you'll check it out!

NoVA Elixir Meetup

The NoVA Elixir meetup is off and running. We've had several excellent meetings and we are always looking for new members who are interested in Elixir. We generally have a short presentation and we focus on writing code the rest of the time. The group is very beginner friendly, so come on out if you'd like to learn more about Elixir!


The latest version of the RRDP Draft RFC is available for review!

Speaking at BigConf

Cobenian founder Bryan Weber will be giving a talk on using Python for data analysis at BigConf on March 28, 2014.

Proud Sponsor of Gophercon

We're proud to be sponsoring our second conference of 2014, Gophercon in Denver, CO from April 24th - 26th.

Proud Sponsor of PyTennessee

We're proud to be sponsoring our first conference of 2014, PyTennessee in Nashville, TN from February 22nd - 23rd.

Cobenian Turns 1 Year Old

We're really thankful to everyone who was involved in an awesome first year. We look forward to serving you even better in the years to come!

Proud Sponsor of Clojure Conj

We're proud to be sponsoring our second conference of 2013, Clojure Conj in Alexandria, VA from November 14th - 16th.

Proud Sponsor of Monitorama

At Cobenian we use open source software all the time so we're happy to give back to the community by sponsoring Monitorama, an open source monitoring conference and hackathon in Boston, MA on March 28th & 29th. The conference is sold out this year, but be sure to check it out next year and follow along this year on twitter by searching for #monitorama.

Cobenian at NANOG 57

We had a great time at our first NANOG! The hallway track and security talks were really excellent. There was even a talk from Merit about using RPKI to populate an IRR! If you ever attend NANOG in the future come and introduce yourself, we'd love to talk.

Slides from NoVA Networkers

Here are the slides from tonight's presentation on BGP, RPKI and BGPSec. More information about the meetup can be found at NoVA Networkers.

RPKI at Mozilla

The operations team at Mozilla has a new blog post out about their use of RPKI. They have some particularly nice comments about how easy the system is to use. It warms our hearts to see that our hard work, as part of the team that implemented RPKI at ARIN, was recognized by someone as easy to use because we spent a lot of time trying to make a very complex system usable. We recently updated our documentation on RPKI & BGPSec so be sure to check it out!

Speaking at NoVA Networkers

Cobenian founder Bryan Weber will be speaking in Reston on January 23rd on BGP, RPKI and BGPSec. More information about the meetup can be found at NoVA Networkers.

Slides from DevIgnition

Slides from the Golang talk can be found at Safer Systems Programming with Google Go. The first half covers BGP and how Syria was taken offline recently. The second half covers how we might model BGP in Go.

Secure Route Origination

If you would like to learn more about what you can do to help secure BGP routing on the Internet, check out our RPKI documentation. We also offer a free RPKI browser that allows you to view the essential information inside the X509 certificates, CRLs, manifests and ROAs that can be found in an RPKI repository. It is a bare-bones application that was developed in under 24 hours. More features will be added over time.

Contact Us

(703) 828.5180
PO Box 1009 Centreville, VA 20122