07 Apr 2020 » IoT Sniffing

I bought a 25€ smart plug from a DIY shop, mostly because it was very cheap. I hadn't poked at any IoT devices before. I was struck by both the price and the complete lack of branding on the box - no idea who owns the infra, who makes it, etc. Getting it set up and on the home wifi was easy. I started poking at their app with mitmproxy. First interesting thing - it logs my phone's location each time I use the app.

Steps for self:

  1. Change the linux box to forward packets (sysctl sketch after this list), and change dhcpd clients to send traffic through the linux box. Sniff with:

     tcpdump -i enp2s0 host 192.168.1.20 and port 8886 -w /tmp/8886
    

    and similar.

  2. Use nftables to redirect packets from my iphone and the device to port 8080 on the linux box.

     nft add table ip nat
     nft add chain ip nat PREROUTING '{ type nat hook prerouting priority 0 ; }'
     nft add rule ip nat PREROUTING tcp dport 80 counter log redirect to :8080
    

    There are better ways. One problem I found: a counter- or log-only rule in a type nat hook didn't do anything unless there was also a rule that actually changed packets.

  3. Use mitmproxy to peek at traffic. Add a cert to the iphone with mitm.it.

     SSLKEYLOGFILE="$PWD/.mitmproxy/sslkeylogfile.txt" mitmproxy --mode transparent --showhost --listen-port 8080
    

    I had to work around debian bug 928749 with

     ln -s /usr/share/fonts-font-awesome/css/font-awesome.min.css /usr/lib/python3/dist-packages/mitmproxy/addons/onboardingapp/static/fontawesome/css/font-awesome.min.css
    

    I had hoped to try wireshark's support for decoding TLS (see the tshark sketch after this list).

  4. Try stuff out!
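For step 1, forwarding is the usual sysctl toggle - a sketch, assuming the same linux box as above:

    # let the linux box route the phone's and the plug's packets
    sysctl -w net.ipv4.ip_forward=1
    # persist across reboots
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf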
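For step 3's keylog file, wireshark should be able to decrypt the tcpdump capture from step 1 - untested on my side, and the preference name (tls.keylog_file, in recent wireshark versions) is an assumption:

    tshark -r /tmp/8886 -o tls.keylog_file:"$PWD/.mitmproxy/sslkeylogfile.txt"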

It turns out that it is a Tuya device. See API calls or firmware.

At this point, I stopped poking, as I didn’t actually have any idea what to use the device for :).


05 May 2013 » network

Many years ago, I started using a 10/100MBit ethernet switch at home. At the time, gigabit switches were about 5-10x as expensive, and it didn't seem worth it. After all, spinning disks in ideal usage do about 100MBit, and I couldn't imagine that I'd end up wanting to go from RAM to RAM on two different machines.

Anyway. I've been copying SD cards from our trip to Utah from my laptop (which has an SD port) to my file server. iftop pointed to the reason the copies were taking a while - I was saturating my switch. Upgrading to gigabit was pretty cheap, so I did that. I suspect this will roughly double throughput: the file server (a raid5 array of disks) is only at about 55% IO usage, so the disks have headroom once the network stops being the bottleneck.

Anyway. I was just surprised that I've finally hit that bottleneck. And that waiting for prices to come down has paid off. Yey.


24 Mar 2013 » reader replacement

I've spent some time over the St Patrick's day weekend and this weekend working on my google reader replacement. It's pretty much there. The code is at github and there is an okay readme there too. You can visit reader to try it out.


24 Mar 2013 » logging out

At work there is a web single sign on system. And it has the feature that visiting a single page will log you out. This leads to people setting up comedy redirects. I may have been one of these people. The problem is called logout CSRF - and the folks who look after the system say it's working as expected. The google app security reward program explicitly calls out logout CSRF as "Difficult, long-term browser-level improvements are required to truly eliminate this possibility".

CSRF is a simple attack. The user visits a malicious website. The source for that website refers to a good website in either HTML or JavaScript. The user's browser runs the code and sends GET or POST requests to the good website. The good website assumes the user told the browser to send the requests - for example, to transfer money. This is the confused deputy problem.

This is a pretty well understood attack now. Most sites have defences, but the logout hole lingers. Here's an example. Don't click it!

I thought that this would be a fun thing to explore one weekend. I threw together a pretty simple login system in python and web.py. Quick aside: web.py is lovely. The user logs in, and gets a cookie. For every user action, the cookie is sent by the user's browser, and validated. There's no server side session stuff at all. When the user logs out, the cookie is deleted. The code is on github.

Quick note on the cookie contents. The cookie contains the username, the timestamp the cookie was generated at, the cookie expiry time, and an HMAC over those fields. If an attacker changes any of the data in the cookie, the HMAC is invalid. The attacker does not know the key for the HMAC, so cannot generate a valid cookie.

Why store all the information client side? The main reason is DoS protection.
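A minimal sketch of the scheme (my reconstruction, not the code on github - the field layout and key handling are assumptions):

    import hashlib
    import hmac
    import time

    SECRET_KEY = b"server-side secret"  # the hmac key is the only server side state

    def make_cookie(username, lifetime=3600):
        now = int(time.time())
        payload = "%s|%d|%d" % (username, now, now + lifetime)
        sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "|" + sig

    def check_cookie(cookie):
        payload, _, sig = cookie.rpartition("|")
        want = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, want):
            return None  # tampered - a valid sig needs the key
        username, issued, expiry = payload.split("|")
        if int(expiry) < int(time.time()):
            return None  # expired
        return username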

Anyway, there are CSRF attacks in both the login and logout pages. For logout, the evil site loads the logout url in an iframe. For login, the evil site has javascript which submits the login form with a different username and password. Code for both is in git and live on another site.

The defence against CSRF is pretty well known - for every form or url, include a hidden value that the attacker can't guess. Check for the hidden value on every action. If it's missing or corrupt, reject the action.

I ended up using the user's IP, a timestamp, an expiry time, and an HMAC. The timestamp and expiry protect against replay attacks. The user's IP protects against an attacker scraping the login form, getting a valid token, and using it for the attack. Once again, the only server side state is the HMAC key.
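The token is built the same way as the cookie - again a sketch of the shape, not the code from github:

    # reuses SECRET_KEY, hmac, hashlib and time from the cookie sketch above

    def make_csrf_token(client_ip, lifetime=600):
        now = int(time.time())
        payload = "%s|%d|%d" % (client_ip, now, now + lifetime)
        sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "|" + sig

    # validation mirrors check_cookie above, plus a check that the embedded
    # ip matches the requesting ip - a token the attacker scraped from his
    # own visit to the form won't validate when replayed from the victim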

Clearly, all of this has to happen over https. Otherwise a passive attacker can sniff the password or the cookie, and do what he wants. https also stops an active attacker changing the requests or responses.

At this stage it seems things are pretty well protected - any user action, including logout, is tied to a token the attacker can't discover or fake. So why doesn't Google protect against logout CSRF? The answer is in cookie bombardment and cookie forcing. They worry about the case where there is an active man in the middle attacking the user, and the user is doing some other browsing while logged into gmail. The MitM can inject or modify http requests and responses, but not gmail's https requests. When the user requests a different site, the attacker can 302 them to the http version of gmail, intercept that request, and reply with a cookie-clearing header. Voila, the user is logged out.

Cookies have grown two features over the years to protect against different attacks: secure and HttpOnly. An HttpOnly cookie is still sent over the wire in http or https requests, but can't be accessed by javascript. A secure cookie is only sent to an https url. However, an http page can set a cookie with the same name as a secure cookie, and therefore overwrite the secure cookie. This is a bug, in my opinion, and it lets an active attacker force a logout of an https only site.

The solution to this, in my mind, is HSTS. This is a header to tell the browser that all traffic for a domain is over https. This prevents the MitM from injecting a request over http, and therefore from injecting cookies. Wee.
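The header itself is one line (a year's max-age here, which is arbitrary):

    Strict-Transport-Security: max-age=31536000; includeSubDomains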

The next attack is cookie bombardment. The MitM starts sending lots of cookies for unrelated sites. The browser's global limit on the number of cookies it can store is reached. The browser starts evicting cookies. It evicts the login cookie, logging the user out.

It's hard to tell how big a deal this is. The attacker can issue up to 20 redirects between different domains for each user http click. For each of those domains, he can set 150 cookies with 4k of data each - 12M of data per click. I haven't worked out what the browser's limit is. My Cookies file is 700k.

Anyway, at this point we've gone from an attacker tricking the user into clicking a url to an active attacker spraying cookies around. I figure the logout hole should be closed a bit more than it is.


16 Mar 2013 » spdy

I started supporting spdy this morning. It was pretty simple. I downloaded the most recent version of openssl (to pick up NPN), the nginx source, and a patch. Applied the patch, built a static nginx binary, and off we went. The only problem was fiddling with the init script to refer to the new binary and the config in /etc. I also cleaned up a few bits of the nginx config. Spdy check is happy.

Oh, can you spot why this extract from debian's /etc/init.d/nginx might not be ideal?

set -e
  restart|force-reload)
        echo -n "Restarting $DESC: "
        start-stop-daemon --stop --quiet --pidfile \
                /var/run/$NAME.pid --exec $DAEMON || true
        sleep 1
        test_nginx_config
        start-stop-daemon --start --quiet --pidfile \
                /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
        echo "$NAME."


24 Feb 2013 » rise of the middle management machines.

Nerdy content. Over on hacker news there was a guy showing off his website that prints inspirational quotes at you. I'm sure that by now he's worth more than Bill Gates. I'm not really a fan of inspirational quotes. I wondered if it would be possible to generate such quotes at random.

Enter python. Enter a Markov chain. Enter a collection of quotes taken from the internet. Look on the following and despair.

"Simplicity is art." Sounds real to me.

The source.
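The gist fits in a few lines - a sketch of the approach rather than the linked source, with quotes.txt standing in for the scraped corpus:

    import random
    from collections import defaultdict

    # word-level markov chain: map each word to the words seen after it
    words = open("quotes.txt").read().split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

    # "generate" a quote by walking the chain from a random start
    word = random.choice(words)
    out = [word]
    for _ in range(12):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    print(" ".join(out))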


19 Feb 2013 » Nginx

I've moved this web server over to Nginx. Let me know if it broke something.


23 Dec 2012 » Nerd updates

Nerdy content. I made a few changes to the blog. I have moved the URL to www.nuttall.im, rather than http://nuttall.im/chile/. There wasn't anything interesting on / anyway. I should have got mod_rewrite to do the correct rewriting (see the sketch below), but if a url is broken for you - or something else is broken - let me know at psn _at_ nuttall.im.
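Something like this in the Apache config would have done it (a sketch, untested):

    # send the old /chile/ urls to the new home
    RewriteEngine On
    RewriteRule ^/chile/(.*)$ http://www.nuttall.im/$1 [R=301,L]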

I've moved to SSL and STS. The long term goal for ssl is either to set up ajaxterm or similar, or to play around with spdy. I would like ajaxterm as a backup for putty in internet cafes, and I want to play around with spdy anyway. STS protects against sslstrip, and also does a better job at flushing out mixed content warnings.

Adam Langley's blog is pretty good at explaining what to do. I got the certs off startssl - who were puzzled by an Argentinian IP and a Dublin address and wanted to ask more questions. They were convinced by a Google maps photo of the roof of my block of flats. Ob Simpson's reference. I also screwed up serving the certs with Apache - I didn't send the full chain. This broke Firefox, but not Chrome. Problem found by my Dad and reproduced with ssllabs' test thingo.

I have found one problem with ssl - disqus broke. It fetched a resource over http. The Google suggested turning off their 2012 update. It seems to work after that.

I added the www subdomain so I could scope cookies to www.nuttall.im, not nuttall.im. I'm not doing much with cookies, so I might change that back.


16 Dec 2012 » Civil rights Captcha

Warning - nerdy content. I spent a lazy afternoon back in Dublin pulling apart Civil rights captcha, and I wanted to save my notes by sticking them on the blog.

Background

Civil rights captcha is a system that aims to educate people on civil rights as well as tell humans from robots.

Note that wired and therefore hacker news talk about filtering out internet idiots with this, which isn't mentioned on their site.

How secure is this?

First idea - they only have a few questions. I think each question takes a human to come up with and review. They can't really raise civil rights awareness with an incorrect collection of ills, and they don't want to be sued for libel.

Download the page 1000 times.

mkdir -p dataset
for x in {0..1000}; do
  curl -s -o dataset/$x captcha.civilrightsdefenders.org
done

Compare them to one another.

for x in dataset/*; do
  diff dataset/0 $x | egrep '^>'
done | sort | uniq > questions

They have given 8 questions. Theories as to why:

They might also have many correct answers per question. With a normal captcha you only have one correct answer.

How many answers are there? Use chrome to grab an image url. Use curl to hit that url a few times. Each file has a different sha1sum (it would be nice to have a command line tool that uses a cheaper hash), so it's possibly a bug in the loop, or a different image each time. Download 1000 images.

mkdir images
for x in {0..10000}; do
  curl -s -o images/$x 'http://captcha.civilrightsdefenders.org/captchaAPI/securimage_show.php?sid=xJZNm2G1mK5TQQH69mX3&newset=7&lang=en';
done

Hash all the images, see 1003 different hashes. Ideas:

Look at the images. Lots of different words, some negative, some positive. Some dupe words, but not many. Download 6k images. All of them are different.

Peer at chrome's debugger. Watch the process. The javascript fetches one image with newset=1, and two more without the newset parameter. Each request has a sid parameter set to a random string. The random string is different for each image. The newset request sets a cookie, which is sent back to the server. Example cookie:

Set-Cookie: PHPSESSID=eq0llt1rjtfr0h3fa0mlorrm67; path=/

Random string notes: it's not clear what purpose the random string serves. If I had to guess, it prevents http caching.

Once the user enters an answer, it does validation with a request like so:

curl --cookie 'PHPSESSID=e66bfeidg9ukm1ovvk9cn1i8f6' \
  'http://captcha.civilrightsdefenders.org/captchaAPI/?callback=jQuery1&code=concerned'
result:
jQuery1({"answer":"false"});

So it presumably stores a map of session to correct answer on the server side, and returns a json blob saying whether the user's input was correct.

Code for a session

set -eux
session_id=$RANDOM
dir=session-$session_id
mkdir $dir
random=$(printf "%06daaaaaaaaaaaaaa" $session_id)
curl -s -o $dir/1.png --dump-header $dir/1.headers \
  "http://captcha.civilrightsdefenders.org/captchaAPI/securimage_show.php?sid=${random}&newset=1&lang=en"

cookie=$(awk '/Set-Cookie:/{print $2}' $dir/1.headers | tr -d ';')
awk '/Set-Cookie:/{print $2}' $dir/1.headers
curl --cookie "$cookie" -s -o $dir/2.png --dump-header $dir/2.headers "http://captcha.civilrightsdefenders.org/captchaAPI/securimage_show.php?sid=${random}&lang=en"
curl --cookie "$cookie" -s -o $dir/3.png --dump-header $dir/3.headers "http://captcha.civilrightsdefenders.org/captchaAPI/securimage_show.php?sid=${random}&lang=en"

echo $dir
echo 'work out the answer'
read answer

curl -s --dump-header $dir/answer.headers --cookie "$cookie" "http://captcha.civilrightsdefenders.org/captchaAPI/?callback=jQuery1&code=${answer}"

Can this be brute forced?

Start out by sending the contents of /usr/share/dict/british-english

% wc -l /usr/share/dict/british-english
99156 /usr/share/dict/british-english

It takes 30s to test 100 words, so testing all of british-english would take about 8 hours. New plan: find a list of words for emotions on the internet. Like so.

  1. ~700 words, so under 5m to test them all. Doesn't work.
  2. How about solving the captcha, verifying it, then sending 10 verify requests for random words, then trying to verify the correct answer again? Fails.
  3. How about trying a random word, then trying the correct answer? Fails.
  4. How about trying the correct answer twice? Works both times.

Even though the key space is quite small - O(100) words - brute forcing is hard because any false answer drops the session.
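Roughly, the per-session test loop looks like this (a reconstruction, not my saved code; emotions.txt stands in for the scraped word list):

    while read word; do
      # one guess per line; a wrong code invalidates the session,
      # so in practice each word needs a fresh session and solve
      curl -s --cookie "$cookie" \
        "http://captcha.civilrightsdefenders.org/captchaAPI/?callback=jQuery1&code=${word}"
      echo " <- $word"
    done < emotions.txt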

Conclusion

This is more robust than I expected. A lot of the attacks I expected to work don't. There are fairly few questions, but the questions don't matter. There are O(100) text answers, but it generates a new image for each request, so there isn't any point in solving the images offline (or spending time trying to use their site as an oracle for the images). It reduces to the normal image captcha problem - OCRing images online. It's also probably vulnerable to DoS attacks that open many sessions.

Post-script: actually reading their docs shows that it's based on php captcha.


16 Nov 2012 » Google Authenticator

Warning - more nerd content.

I found myself typing this (and the post before it) over Putty. (Note for the more twitchy - strong encryption is legal in England (where the computer is), Chile, and Argentina. The list of countries that outlaw strong encryption is pretty similar to the list of countries with records of making citizens vanish.) Putty does not understand ssh keys in whatever format openssh uses. I dare not dig deeper. Having a system where I download an ssh key onto an untrusted machine is not wise. Having a system where I enter a password into an untrusted machine is not wise either.

To deal with this, I set up Google authenticator for pam. This works pretty well so far - on a trusted machine, I can continue to use ssh keys, and on an untrusted machine, I can use the same one time code system as I use for email etc. I also have rescue codes in my pocket in case I lose my phone.
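The wiring is roughly this - the standard google-authenticator pam setup, though file paths and sshd option names vary by distro, so treat it as a sketch. Run the google-authenticator tool as the user to generate the secret, then:

    # /etc/pam.d/sshd - ask for the one time code
    auth required pam_google_authenticator.so

    # /etc/ssh/sshd_config - let pam do challenge-response
    ChallengeResponseAuthentication yes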


04 Nov 2012 » nerdy notes

I'm a nerd. Here's some nerdy notes.

Cameras: I'm planning to take my point and shoot. It's maybe a year old. It's grand. Now the bad news is that I can take a lot of photos - over 100 a day. So amazon sent us ten 8GB SD cards. Betcha we still run out. Betcha that I'll look at the idea of only 80GB of storage and laugh. I've also bought space on Picasa, which was dead cheap. The hope there is that we'll be able to get the originals back through takeout later. Facebook is much more of a pain in this respect. We've also got some blank cds, which we'll try and back stuff up onto. The fear here is both losing or damaging an SD card, and theft.

My camera can upload photos direct to the internet. Making this functionality work has been a pain in the backside. It also seems to drain the battery. We'll see if it works out there.

We'll probably want to get into email and facebook while we're out there. We'll probably do it from internet cafes with interesting collections of malware. So 2 factor and SMS auth. SMS auth is a bit more of a pain, as we don't really know phone numbers out there - experience and common sense argue for buying a new sim card. I've also picked up a yubikey for lastpass. We'll probably still have problems.

I'm taking an android phone, and to reduce the impact of it being lost or stolen I've set up the android / google apps corp system for registering phones in the control panel. It lets you ring phones, wipe them remotely, and see where they are. It's a google apps for your domain feature that actually costs money, but it's pretty cheap. And it means I never lose my phone again. I also set up the normal security settings - encrypting the phone and so on. We'll see.

This site is hosted on a little virtual machine sitting in London. To get into it I depend on ssh keys. I've stuck those keys on my phone and can connect with connectbot, but using a unix text editor with a phone's keyboard may drive me insane. We'll see how much blogging we do ;-). I have thought about putting putty and the keys on a USB stick and plugging it into internet cafes - we'll see.

I also have a kindle. I like the kindle as it's dead light and has a ton of books on it. Laura points out the kindle will probably be nicked. We'll see. At least I bought a waterproof bag for it at some earlier point.
