Making your own smart ‘machine learning’ thermostat using Arduino, AWS, HBase, Spark, Raspberry PI and XBee

Previous part:
3. Sending data, at 1,000 values per second, to a Raspberry PI (Python)

4. Storing data in the Amazon Cloud (HBase)

4.1 HBase as Key Value store

Initially I thought of using a relational database for permanently storing the collected sensor values, mainly because I thought it would be easy. If the Raspberry PI and the cloud server ran the same database system and were connected through a database link, updating would be as simple as an 'insert into … select * from …' statement. Using SQLite made this impossible because it does not support remote tables. In addition, storing 1,000 values per second would quickly increase the number of rows (1,000 × 86,400 × 365 ≈ 31.5 billion for one year), possibly going beyond the limits of a relational database management system (RDBMS). Finally, because this was mostly a learning project and I had some experience with big data technology, such as Hadoop and HBase, I decided to go for a Key Value store.

I decided to use HBase as the storage system, mainly because I had some experience with using HBase in combination with Python, and secondly because it is the underlying system of OpenTSDB, an open source project for storing time series data. That is almost the essence of what I was trying to achieve. I still decided to use standard HBase instead of OpenTSDB, to better understand what was happening in the storage layer and to have more control over the row key design.

4.2 HBase row key design

One of the most important parts of using HBase is the row key design. Lars George explains this in detail in his book HBase: The Definitive Guide. I chose this one:

<device id>_<port id>_<reverse epoch>_<reverse milliseconds>

For example: 40b5af01_rx000A01_8571346790_9184822.

The key design uses the concepts device and port, both of which are XBee concepts. A simple combined <sensor id>, similar to OpenTSDB, might be a better alternative because it is a more general approach. On the other hand, splitting the key into the id of the device that sends the value and the id of the sensor on that device is not a bad concept either; in this case the port id can be seen as the sensor id. Both device id and port id are fixed-length alphanumeric ids. The time parts of the key are reversed so that the scan function of HBase quickly returns the latest value of each sensor: the most recent measurement gets the lowest key and therefore comes first in a scan. The sensor table in HBase uses only one column and one column family.
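
As an illustration, such a row key could be assembled in Python as sketched below. The all-nines subtraction constants and the 7-digit sub-second field are my assumptions, reverse-engineered from the example key above; the original script does this elsewhere.

Row key sketch (Python)


import time

def make_row_key(device_id, port_id, ts=None):
    """
    Builds a key of the form <device id>_<port id>_<reverse epoch>_<reverse sub-second>.
    Subtracting from all nines reverses the sort order, so a scan on the
    <device id>_<port id>_ prefix returns the newest value first.
    """
    ts = time.time() if ts is None else ts
    epoch = int(ts)
    sub = int((ts - epoch) * 10**7)  # sub-second part, assumed to be 7 digits
    return '%s_%s_%010d_%07d' % (device_id, port_id,
                                 9999999999 - epoch, 9999999 - sub)

# example: make_row_key('40b5af01', 'rx000A01') returns something like
# '40b5af01_rx000A01_8571346790_9184822'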

4.3 Uploading data to HBase from the Raspberry PI

The Raspberry PI inserts the values directly into the HBase table. Every second it looks up the latest timestamp for each sensor in the HBase table and inserts every value from the SQLite table on the PI that has a higher timestamp (a sketch of this lookup-and-insert step follows the thread code below). If for any reason the internet connection between the Raspberry PI and the cloud server breaks, all recorded values on the Raspberry PI are automatically inserted when the connection is reestablished. Only the values that have not yet been removed by the 'delete' thread on the Raspberry PI are uploaded, so currently the connection can be down for 2 minutes without data loss.

The connection type is a 'Thrift' connection. Thrift has the advantage of being a binary protocol, reducing the overhead compared to, for example, REST, and there is a nice Python wrapper for HBase Thrift (happybase). Values are sent in batches with a maximum of 5,000 values per batch to limit memory overhead on the Raspberry PI. For security the Thrift connection runs through an SSH tunnel; autossh sets up the connection between the Raspberry PI and the cloud server. The Python script runs as a daemon, following the excellent instructions by Stephen Phillips: http://blog.scphillips.com/posts/2013/07/getting-a-python-script-to-run-in-the-background-as-a-service-on-boot/ .

autossh: SSH tunnel for Thrift


autossh -M 0 -q -f -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -L 9090:localhost:9090 -i ~/.ssh/<keyfile> ubuntu@<server address>.amazonaws.com
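
In this command -M 0 switches off autossh's own monitoring channel and relies on the two ServerAlive options instead: ssh itself exits when the server stops answering, after which autossh restarts the tunnel. -f and -N put the tunnel in the background without running a remote command, and -L 9090:localhost:9090 forwards the local Thrift port 9090 to the same port on the server.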

Upload thread


import codecs
import datetime
import logging
import time

import happybase  # provides the HBase Thrift client used for the inserts

# xbee (the XBee serial object), dbc (the SQLite connection) and the
# fins() insert function are set up elsewhere in the script (see part 3).

def xinsert():
    """
    Receives and sends data from and to the XBee.
    Sending and receiving are combined in one thread because only one
    thread can use the GPIO port.
    Receiving is done as fast as possible.
    Sending only at a 1 second interval.
    Runs as a separate thread.
    """
    ficurtime = 0
    fivepoch = 0
    fiprevepoch = 0
    ficursource = '40b5af01'  # device id of the Raspberry PI, only used for logging
    DEST_ADDR_LONG = "\x00\x13\xA2\x00@\xB5\xAF\x00"  # destination address, currently a fixed value
    while True:
        logging.debug("insert started")
        logging.debug(datetime.datetime.utcnow())
        try:
            # start with the receive part
            ficurtime = time.time()
            fivepoch = int(ficurtime)
            fimintime = fivepoch - 5  # only consider send messages from the last 5 seconds
            response = xbee.wait_read_frame()
            logging.debug(response)
            vid = response['id']
            if vid != 'tx_status':
                vsource = codecs.decode(codecs.encode(response['source_addr_long'], 'hex'), 'utf-8')[8:]
                logging.debug(vsource)
                logging.debug(vid)
                vdata = codecs.decode(response['rf_data'], 'utf-8')
                logging.debug(vdata)
                if vid == 'rx':  # special case if the port is rx; assumes the Arduino is sending data
                    vid = 'rx' + vdata[0:3].zfill(6)  # first part of the payload is the sensor id
                    vdata = vdata[4:]
                    vdatas = vdata.split(',')  # assumes a comma separated array of values
                    for vdatasv in vdatas:
                        vdatasv = int(vdatasv)
                        fins(time.time(), vsource, vid, vdatasv)  # use fins() to actually insert the data in the database
                else:  # case of a normal XBee payload
                    vid = vid.zfill(8)
                    vdata = int(vdata)
                    fins(time.time(), vsource, vid, vdata)  # use fins() to actually insert the data in the database
            # send at a 1 second interval
            if fivepoch > fiprevepoch:
                fiprevepoch = fivepoch
                dbd11 = dbc.cursor()
                dbd11.execute("SELECT vsendmes from sendmes where vepoch > %i order by vepoch desc,vsub desc LIMIT 1" % fimintime)
                rows = dbd11.fetchall()
                for row in rows:
                    fipayload = row[0]
                    xbee.tx(dest_addr='\x00\x00', dest_addr_long=DEST_ADDR_LONG, data=str(fipayload), frame_id='\x01')  # send through the XBee
                    fins(ficurtime, ficursource, 'rx000B04', int('9' + str(fipayload)))  # log the sent message in the hsensvals table
                dbd11.close()
        except Exception:
            logging.exception("fxinsert")
        time.sleep(0.001)
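
The fins() function called above buffers the values in the local SQLite table (see part 3). The upload loop itself, which compares timestamps and pushes newer values to HBase, is not shown, so here is a minimal sketch of what it could look like with happybase. The table name and column family follow section 4.6; the local table name 'sensvals', its columns and the make_row_key() helper from section 4.2 are hypothetical stand-ins for the real script.

Upload sketch (Python)


import happybase

def upload_new_values(dbc):
    """
    Pushes locally buffered values that are newer than what HBase holds.
    Assumes the Thrift server is reached through the autossh tunnel on
    localhost:9090 and that the SQLite table 'sensvals' has the columns
    (vsource, vid, vepoch, vsub, vdata) -- hypothetical names.
    """
    conn = happybase.Connection(host='localhost', port=9090)
    table = conn.table('hsensvals')
    cur = dbc.cursor()
    cur.execute("SELECT DISTINCT vsource, vid FROM sensvals")
    sensors = cur.fetchall()
    with table.batch(batch_size=5000) as batch:  # max 5,000 values per batch
        for vsource, vid in sensors:
            # Reversed timestamps sort newest first, so the first row of a
            # prefix scan is the latest value stored for this sensor.
            prefix = ('%s_%s_' % (vsource, vid)).encode('utf-8')
            latest = 0
            for rowkey, _ in table.scan(row_prefix=prefix, limit=1):
                latest = 9999999999 - int(rowkey.decode('utf-8').split('_')[2])
            cur.execute("SELECT vepoch, vsub, vdata FROM sensvals "
                        "WHERE vsource = ? AND vid = ? AND vepoch > ?",
                        (vsource, vid, latest))
            for vepoch, vsub, vdata in cur.fetchall():
                key = make_row_key(vsource, vid, vepoch + vsub)  # vsub assumed to hold the fractional second
                batch.put(key.encode('utf-8'), {b'fd:v': str(vdata).encode('utf-8')})
    conn.close()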

4.4 Messaging alternative

An alternative, of course, would have been to use a messaging system. Messaging systems, however, are much harder to develop and particularly more difficult to debug. At least for me they are.

4.5 Installing HBase on Amazon server

The Amazon server I use is an m1.large instance running Ubuntu. This instance has 8 gigabytes of memory, which is required to run HBase, and 800 gigabytes of storage. This storage makes it possible to save 10 years of raw data (compressed). I use spot pricing to reduce costs and so far I have not paid more than 20 USD per month for this server. This does of course show that this is mainly a research project: to save money on your energy bill you would at least have to make up for the monthly server fee.

In my installation of HBase the data is stored on the Hadoop file system HDFS, which is very common. For the installation of Hadoop I followed the excellent tutorial by Aravindu Sandela on installing single-node Hadoop 2.2.0 on Ubuntu: http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/ . For the installation of HBase I followed the tutorial on the Apache HBase website for running HBase in pseudo-distributed mode. Of course both HBase and Hadoop are designed to run on a cluster of servers, so running them on a single machine defeats part of their purpose, but for this project it works very well.

4.6 LZO compression in HBase

Storing the raw sensor values at the intended collection rate of 1,000 values per second, the 800 gigabytes would have given me only one year of sensor values. That is already more than sufficient for this project, but the OpenTSDB project recommends using LZO compression on the HBase tables. In addition, I use the FAST_DIFF data_block_encoding option. This HBase feature stores only the part of a row key that differs from the previous one. Because my row key design repeats a lot of information between consecutive keys (both device id and sensor id), this greatly reduces the amount of required storage. Together both measures reduced the storage by 90%, to 10% of the original requirement: roughly 80 gigabytes per year instead of 800. This makes it possible to save 10 years of raw sensor information on a single server.


create 'hsensvals', {NAME => 'fd', DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION => 'LZO'}
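
One caveat: the LZO codec is GPL licensed and therefore not bundled with the Apache HBase distribution, so it has to be installed separately on the server before a table can be created with this option. Running describe 'hsensvals' in the HBase shell afterwards shows whether both the encoding and the compression settings are in place.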

Next part:
5. Turning the boiler on and off at the right time (Arduino)
