I released a small project this morning: Disposamail, a web application I built overnight that lets you grab a temporary email address and use it for as long as you stay on the Disposamail website. Once you leave, the address is released, the mail server stops accepting mail for it, and any emails that were received are lost forever.

Disposamail is written in Node.js and uses several third-party modules that provide a lot of the functionality:

  • Haraka – an SMTP server with an extensive plugin architecture, which ultimately made this entire project possible.
  • socket.io – makes sending data between the server and the client easy, using the best transport available.
  • MailParser – parses raw emails into their component parts.
  • Phonetic – generates phonetic names for easy-to-remember email addresses.
  • forever – keeps a Node.js script running in the event something bad happens.

The best part of this project, though, is that I’ve released the code under the AGPL. You can check out the code on GitLab.

Update: Disposamail can now handle attachments!

Verizon Email API Vulnerability

A critical vulnerability has been found in Verizon’s email API that allows any user to access any other user’s email, provided they know how to craft the right requests to Verizon’s server. Randy Westergren noticed this vulnerability while proxying requests from his device (presumably to see what some apps were sending to their motherships) and found his Verizon user ID within the request headers. By changing his user ID to another user’s, he got the server to respond with that user’s information.

This is why you should always validate user input, and by “validate” I don’t just mean preventing things like SQL injection (though you should do that as well); I mean checking any and all input to verify that the user is actually authorized to perform the requested action, even if that input came via your own app and wasn’t technically “user input”. Had Verizon checked the username against the user’s session, or better yet, not sent the username at all and instead used the user ID already stored in the session (assuming they use some kind of session functionality), this would not have been an issue. At least they fixed it quickly once the issue was reported to them: two days, according to the article.
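The fix boils down to treating any identifier that arrives from the client as untrusted and resolving the acting user from the server-side session instead. A hedged sketch of that check (the session shape and function name are my own, not Verizon's API):

```javascript
// Authorize a mailbox request from the server-side session, never from
// a client-supplied user ID (illustration; names are hypothetical).
function authorizeMailboxAccess(session, requestedUserId) {
  if (!session || !session.userId) {
    return { allowed: false, reason: "not authenticated" };
  }
  // A client-supplied ID is only ever compared, never trusted.
  if (requestedUserId !== undefined && requestedUserId !== session.userId) {
    return { allowed: false, reason: "user mismatch" };
  }
  // Better still: ignore the request's ID entirely and act on the
  // session's own user ID.
  return { allowed: true, userId: session.userId };
}
```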

One last thing: you really shouldn’t be using your ISP-provided email address in the first place, as you’ll most likely lose it when you switch providers. Please, just use something like Gmail instead. Switching to a new ISP shouldn’t be like moving, where you have to update your address literally everywhere.

Email Tracking

In both my personal and professional life I’ve had people ask me how to determine the status of an email, or whether they should use a service that can. While it would be useful to know whether an email was delivered to the user’s inbox, whether the user has read it, or whether the user has clicked a link within it, all of the methods for doing so produce so many false positives and negatives that it’s really not worth the effort to implement such a system. The following are the general methods for checking the status of a message, along with their downfalls.


Delivery

Checking to see whether an email was delivered to the receiving mail server is easy: it’s as simple as checking your SMTP logs. It’s not really much harder than that.

The problem with this, however, is that servers may “accept” the mail only to put it in the user’s spam folder, or worse yet, drop it without notification. Dropping email without notification can happen when the mail server doesn’t check it for spam, etc., until after closing the connection with the sending mail server; the reason you won’t get a notification back is to reduce backscatter. As for the spam folder, you can’t really do anything about that other than setting your SPF and DKIM records appropriately, signing your emails with said DKIM keys, and ensuring that the content of your emails doesn’t look “spammy”.
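For reference, both SPF and DKIM live in DNS as TXT records. A hedged example for a hypothetical example.com (your selector name, key, and policy will differ):

```
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
```

The SPF record says only example.com’s MX hosts may send its mail; the DKIM record publishes the public key (truncated here) that receivers use to verify your signatures.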

The only good notification you may get is a bounce message if the user’s mailbox doesn’t exist, is full, etc.; however, you’ll normally only get those during the same connection in which the email is sent. If you do happen to get them after the connection is closed, you should probably contact the receiving administrator, as they’re contributing to the backscatter problem.

Read, Click, etc.

You may also want to check whether a user has read your email, which could be useful in some circumstances. The following are the methods to do so and their downfalls.

Read Receipts

One way to accomplish this is via read receipts. Basically, a read receipt is an email sent back to the sender by the recipient’s mail application when they open an email. The sender’s email application then links this back to the original email and displays a notification indicating that the mail has been read.

While this sounds great, the problem is that this method does not always work. Read receipts can be turned off in the majority of circumstances, and most systems either default the setting to off (i.e., it has to be explicitly enabled) or don’t support it at all. For instance, Google’s FAQ about read receipts indicates that they are only available on Google Apps accounts if an administrator enables them, and are not available on personal accounts. Additionally, Google’s FAQ states the following:

Do not rely on read receipts for certifying mail delivery. Although read receipts generally work across email systems, you may sometimes get a receipt for an unread message or not get a receipt even though the recipient has read the message.
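For what it’s worth, a read-receipt request is just a header on the outgoing message, defined for message disposition notifications (RFC 3798); the recipient’s mail application is free to honor it, prompt the user, or silently ignore it. The addresses here are placeholders:

```
From: sender@example.test
To: recipient@example.test
Subject: Quarterly report
Disposition-Notification-To: sender@example.test
```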

Remote Images

The next trick is to use a remote image within the email to signal to a server that the email has been read. When the email is opened, the user’s browser or mail application loads the remote image, allowing a server to tell that the email has been opened.

The flaw with this method is that most mail applications will not display remote images by default, and neither will most webmail systems. Google’s FAQ for Gmail indicates that images may be shown, but only after Google determines them not to be malicious. This implies that Google fetches and checks the images before ever showing them to the user, so it’s possible the server gets a notification that the user has opened the email when in reality the user never did.


JavaScript

I’m not even going to go there, as including JavaScript within emails is a good way to get yourself blacklisted, and it won’t work anyway because most providers strip any scripts out of an email before displaying it to the user. See the Super User question about this.


Links

You could use links that contain a unique identifier, and when the user clicks one, the server would see that unique ID and mark the email as read. This is about the only method that may actually work and make sense; however, it shares some of the same problems as remote images. The receiving mail server could scan the links to check whether they lead to malware or something else malicious, and your server may treat that scan as the user clicking the link if you don’t account for it.

The problem with this, however, is that if the user got this far, they’re probably on your site doing whatever it is you emailed them about, which you should be able to track already. So it’s somewhat pointless, except for email marketing campaigns that link to somewhere you don’t control.


Conclusion

Email “status” tracking is inaccurate, with false positives and negatives throughout the process. If you understand these limitations and still want to track your users, then by all means go right ahead; just don’t get mad when your numbers are a little off from what you expected. Lastly, if you’re running a third-party mailing service, please make sure your users understand this as well.

CyanogenMod 11

CyanogenMod has been working on version 11, based on KitKat, for almost a year now, and decided that it was probably stable enough for day-to-day use. Therefore, my phone (a Samsung Galaxy Note II) got upgraded to CyanogenMod 11 M9 a few days ago.


So far it’s been pretty stable. The only snag I ran into was having to upgrade to TWRP 2.7 or higher due to KitKat’s SELinux requirements, which versions of TWRP prior to 2.7 did not support. See my previous article for information on how to get CyanogenMod on your Note II.

Comcast Data Caps – Part 2

I received the following email from Comcast on December 20th:

Dear Comcast Customer:

This is a Courtesy Notice from Comcast to let you know that you have reached 90% of your 300 GB monthly data plan for your XFINITY Internet Service. As of 12-21-2013, you have 30 GB remaining for this calendar month.

For more information on your data usage plan and to view details of your current data usage, please visit http://xfinity.com/mydatausage.

Thank you for choosing Comcast!


Comcast Cable

Then, yesterday (December 22nd), I received the following email from Comcast:

Dear Comcast Customer:

This is a Courtesy Notice from Comcast to let you know that you have reached 100% of your 300 GB monthly data plan for your XFINITY Internet Service. Additional usage will incur overage charges.

For more information on your data plan and to view details of your current data usage, please visit http://xfinity.com/mydatausage.

Thank you for choosing Comcast!


Comcast Cable

Let’s see what my bill looks like next month…

Multicraft Backups

I don’t really like the way Multicraft handles backups. If you’re using a non-official server such as Bukkit or Spigot, along with a plugin that provides multiple worlds (such as Multiverse), Multicraft can’t back up those extra worlds. Additionally, even if you let Multicraft back up all your Minecraft servers, those backups still sit in the individual server directories.

So, I wrote a script that pulls the list of servers via the Multicraft API, tars up the entire server directory for each one, and then uploads those archives to an rsync server. This is a much better backup solution: if anything happens to the server, such as a hard drive dying, the backups aren’t lost along with the worlds.
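The actual script is linked below; as a rough sketch of the same flow in Node.js, the core of it is just building a tar and an rsync command per server directory. The Multicraft API call is stubbed out here, and the paths and rsync destination are placeholders:

```javascript
// Build the tar and rsync commands for each Multicraft server directory
// (sketch; in the real script the server list comes from the Multicraft
// API and the commands are actually executed, e.g. via child_process).
function buildBackupCommands(servers, backupDir, rsyncDest) {
  const stamp = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  return servers.map((s) => {
    const archive = `${backupDir}/${s.name}-${stamp}.tar.gz`;
    return {
      tar: `tar -czf ${archive} -C ${s.dir} .`,   // archive the whole dir
      rsync: `rsync -a ${archive} ${rsyncDest}/`, // ship it off-box
    };
  });
}
```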

Here’s the script: https://gitlab.com/snippets/3342 or https://pastebin.com/z0FsQmCi

Note that this script assumes that all Minecraft servers for a given Multicraft installation are on the server on which the script is running. It will need to be modified to support installations that span more than one physical server.

Comcast Data Caps

I received this email from Comcast yesterday:

Dear XFINITY Internet Customer:

At Comcast, we recognize that our customers use the Internet for different reasons and have unique data needs. Starting December 1, 2013, Comcast will trial a new monthly data plan in this area, which will increase the amount of data included in your XFINITY Internet Service to 300 GB and provide more choice and flexibility.

What this means for You

The vast majority of XFINITY customers use far less than 300 GB of data in a month. If you are not sure about your monthly data usage, please refer to the Track and Manage Your Usage section below.

We want our customers to use the Internet for everything they want and your service will not be limited to 300 GB. While we believe that 300 GB is more than enough to meet the Internet usage needs of most customers, Comcast will automatically add blocks of 50 GB to your account for an additional $10, should you exceed the 300 GB included in your plan in a month.

In order for our customers to get accustomed to this new data plan, we are implementing a three-month courtesy program. That means you will not be billed for the first three times you exceed 300 GB included in the data plan during a 12 month period. Should your usage exceed 300 GB a fourth time during any 12-month period, an additional 50 GB will automatically be allocated to your account and you will be billed $10 for that data and each additional 50 GB of data in excess of 300 GB during that month and any subsequent months your usage exceeds 300 GB.

Please note that this is a consumer trial. Comcast may modify or discontinue this trial at any time. However, we will notify you in advance of any such change.

For more information on all our data usage plans, please visit www.xfinity.com/datausageplan/expansion

Track and Manage Your Usage

Comcast provides you with several tools to easily track and manage your data plan:

  • Usage meter – Use the usage meter to see how much data you have used – available at www.xfinity.com/usagemeter
  • Data Usage Calculator – Estimate your data usage with this tool available at www.xfinity.com/datacalculator. Simply enter information on how often and how much you typically use the Internet, and the calculator will estimate your monthly data usage.
  • In-Browser Notices and Emails – We will send you a courtesy “in-browser” notice and an email letting you know how much of the data included in your monthly plan you are using.

If you have any additional questions about the new data usage plan, please visit www.xfinity.com/datausageplan/expansion

Thank you for being an XFINITY Internet Customer.



I don’t really have much to say about this other than I don’t believe internet service should be sold with both a speed cap and a data cap; it should be one or the other. It looks like I might lower my plan from 105 Mbps to 50 Mbps to make up for the cost difference. To show just how much this is going to affect me, the following is the output of vnstat for last month; note that I believe I installed it on October 4th, so the October numbers are slightly low:

                       rx      /      tx      /     total    /   estimated
       Oct '13    463.28 GiB  /   10.73 GiB  /  474.01 GiB
       Nov '13    143.59 GiB  /    4.37 GiB  /  147.97 GiB  /  626.46 GiB
     yesterday     23.14 GiB  /  769.36 MiB  /   23.89 GiB
         today      1.36 GiB  /   14.92 MiB  /    1.37 GiB  /   31.32 GiB

So this is going to affect me quite a bit. Note that none of this is torrenting or anything like that, this is just normal usage for a family of four that uses Netflix instead of cable television.

Minecraft Server Installer

Last night, I wrote a script to install the Spigot Minecraft server on a server (or desktop) running CentOS 6. On a fresh installation it will install all dependencies, create a user to isolate the server, set up monitoring with monit, and automatically start the server. This is ideal if you want to set up a server on a fresh virtual machine from a VPS provider.

Note that this starts the server with the default Minecraft server settings. You will need to stop the server via the /opt/minecraft/stop.sh script and modify the settings file if you want alternate settings.
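For the curious, the monit side of this is just a process check tied to start/stop scripts. A hedged fragment of what such a check might look like (the pidfile path and start script name are my assumptions; only the stop script path above comes from the installer):

```
check process minecraft with pidfile /opt/minecraft/minecraft.pid
  start program = "/opt/minecraft/start.sh"
  stop program  = "/opt/minecraft/stop.sh"
```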

Link: minecraft-installer.sh

Updating Postfix Aliases from Horde

Last week, I set up my own mail server using Postfix, Dovecot, and Horde. The only problem I have with this setup is that there’s no way to update Postfix’s aliases directly from Horde. That may not seem like a problem, but I have several email addresses, with new ones being created all the time for email forwarding and the like, so that I can stop mail from being delivered simply by removing an address from the aliases file should I somehow lose control of it. To solve this, I wrote a script that pulls all Horde identities and email aliases and uses them to populate the virtual alias maps file.
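The script itself is linked below; the heart of it is just turning a list of (alias, destination) pairs into Postfix’s virtual alias map format. A sketch with hypothetical names (in practice the identities would come from Horde’s preferences storage, and postmap(1) must be run on the file afterwards):

```javascript
// Render Postfix virtual_alias_maps entries from a list of identities
// (sketch; each line is "alias<TAB>destination").
function renderVirtualAliases(identities) {
  return (
    identities
      .map(({ alias, destination }) => `${alias}\t${destination}`)
      .join("\n") + "\n"
  );
}
```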

Note that this only works properly if your Horde login is a user on the underlying system that can receive mail, i.e., you’re authenticating directly against the passwd file, or indirectly through Dovecot or another server capable of doing so. The script could probably be modified to use virtual identities if that’s what you’re using.

Anyway, here’s the script: https://gitlab.com/snippets/3341 or https://pastebin.com/fCJv217t


DNS Server Performance

I set up 4 DNS servers on Monday using RamNode and BuyVM, and have been monitoring their performance since then. I am running PowerDNS with MySQL backends, replicated from the Atlanta node to all other nodes using MySQL master-slave replication. Additionally, the Atlanta node is running Poweradmin.
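Replication of this sort needs binary logging enabled on the master (server-id and log-bin in my.cnf) and a unique server-id, ideally with read-only, on each slave; each slave is then pointed at the master with something like the following (hostname and credentials are placeholders):

```
CHANGE MASTER TO
  MASTER_HOST='atlanta.example.test',
  MASTER_USER='repl',
  MASTER_PASSWORD='...',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
```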

The physical node configurations are as follows:

RamNode – Atlanta
2 CPU Cores – 3.3 GHz

RamNode – Seattle and Netherlands
1 CPU Core – 3.3 GHz

BuyVM – Las Vegas
2 CPU Cores – 2 GHz
30 GB Disk (non-SSD)

Monitoring results from the last 24 hours on each node from another node in RamNode’s Seattle location:

RamNode – Atlanta


RamNode – Seattle


RamNode – Netherlands


BuyVM – Las Vegas


The RamNode nodes are rather consistent, aside from small spikes every hour or so. The BuyVM response times vary widely, however; more than I was expecting. To show just how much the responses vary, here is a 4-hour capture of the response times:


I haven’t looked into why the BuyVM node is performing worse than the RamNode nodes, however I suspect it has something to do with the fact that it has a lower clock speed and its disks are non-SSDs versus the RamNode SSDs.

Update: I moved the BuyVM DNS node to DotVPS’s UK location, and it is performing much better than the BuyVM node. Here is the configuration for that node:

1 CPU Core – 2.27 GHz
25 GB Disk (non-SSD)

And the performance graph for that node:



So I’m still not sure what the issue is with BuyVM. It was still very sporadic up until I turned off that node. Maybe their upgrade to SSDs will fix it, but I’m not sticking around to find out.

Update: DotVPS is no longer in business and therefore links have been removed.