Tuesday, October 24, 2017

SolarQuant gets a push forward from University of Auckland

One of the exciting developments at Greenstage Power has been some collaboration with the University of Auckland on our experimental machine learning module called SolarQuant. This is the stand-alone app server that takes consumption or generation data and aims to learn how that energy flow happened, based on the context of when it happened and what the environmental conditions were at the time. We had a visitor to New Zealand from MIT Engineering named Paige Studer, and she was instrumental in giving SolarQuant a push forward. We interviewed Paige about the project below:


Tell us a little about what you worked on at UofA
During my time at the University of Auckland, I had the privilege of working on SolarQuant, which is a program that aims to accurately predict a building's energy consumption given a set of inputs such as time, weather, temperature, etc. When I arrived at UofA, SolarQuant could take those inputs and a building's energy consumption and find weights for each of the inputs. Then, given only the inputs and the found weights, it would show how closely the calculated energy consumption matched the actual energy consumption. The next step was to see if we could get similar results by taking predicted inputs, calculating an energy consumption from them, and comparing that with what we get from the actual inputs and the actual consumption.
One of the main factors in being able to do this was formatting the predicted weather so that it looked the same as the actual weather, with the exception of a type id showing that it was predicted rather than actual. The predicted weather was taken from a Norwegian weather service, in the form of an XML file. The program would go through the file, find entries that had all of the information we needed, and add them to an initial array. This initial array of predicted weather data had problems, such as not being sorted and containing repeated entries, so it needed to be cleaned up and adjusted to look like real data. A second array was constructed so that the predictions were in chronological order and separated by thirty minutes, without any repeated or missing times. Once this was completed, the program would go through that array, create weather datum objects, and place those objects into a database to be used in the future.
Because the future weather in the database looked the same as the actual weather in the database, we could use it on the SolarQuant platform. From here the program downloads the future weather data and, instead of training on energy consumption, skips straight to the questioning stage, since for the future there is no energy consumption to train on. After this, John was going to add his code, and we would hopefully see predicted energy consumption that could eventually be compared with actual energy consumption for the same time period.
Do you think it will work?
Yes, of course I think it will work! Theoretically it will, so if it doesn't right away it would be due to some bugs in the code that can be fixed. I'm very excited to see where it goes once it is working, because there are some pretty cool applications. One in particular that I find interesting: if we can accurately predict the weather and a building's energy consumption, then with a solar/battery system you could potentially become smarter about when to charge and discharge your battery.
As a developer what are the challenges SolarQuant is going to have – what should we get ready for?
I think that SolarQuant will only be getting better and faster, and that it will be important to stay flexible and be able to adjust with the program. For instance, one thing that John and I had talked about was possibly using a different weather source for predicted weather and how to handle it. Do you make one function that can handle all the different weather sources, or make a function for each weather source? Being open to change in the code and sources in the future will make a difference in how well SolarQuant continues to progress. One idea that John reiterated, which was helpful, is that we want to walk before we run: make small additions and changes and make sure they work before progressing. We don't want to write a lot of code and have it not work without us knowing why.
Did you like NZ? We heard you went bungy jumping!?
New Zealand was absolutely awesome! I loved meeting new people, learning about the Maori culture, and especially loved the adventure atmosphere of New Zealand. On the weekends I was able to go on lots of side trips, my favorite being Queenstown, where I did the Kawarau Bridge bungy jump. I also went black water rafting in Waitomo and sandboarding while visiting the Bay of Islands.
What are the next plans, where to?
I will begin working at a solar energy company in Southern California that specializes in bringing solar energy to schools, often in the form of carports. I will be an Assistant Project Engineer there, and I hope to learn more about solar energy projects, continue to grow my skill set, and make a positive impact on the community.

SolarQuant comparing predicted consumption (blue) with actual (orange) consumption after training on 1 year of data

Some of the early results from Paige's work are shown above in a screenshot of the SolarQuant interface. The trained network's predicted time series is shown in light blue, and the actual power consumption in orange. Thanks once again to Dr. Nirmal Nair at the University of Auckland ECE department, who made this possible! Developers can check out SolarQuant as it progresses here: git@github.com:SolarNetwork/solarquant.git

Sunday, May 28, 2017

FreeBSD with Poudriere on ZFS with custom compiler toolchain

The SolarNetwork main infrastructure has always run on FreeBSD. FreeBSD is great for letting you build packages from source via the ports tree, with options suited to how you want to use them. In the years since SolarNetwork started, FreeBSD has evolved to distribute binary packages via the pkg tool. That can save a lot of time, since you don't have to compile all the software you use from source, but it doesn't work if some package needs a different set of compiled-in options than the FreeBSD packages provide. Additionally, I'd been compiling the packages using a specific version of Clang/LLVM rather than the one used by FreeBSD (originally because one package wouldn't compile without a newer compiler version than FreeBSD's).

Fast forward to now, and FreeBSD has a tool called poudriere, which can compile a set of packages with exactly the options needed and publish them as a FreeBSD package repository, from which any FreeBSD machine can then download the binary packages and install them via pkg. It's a bit like starting your own Linux distro, picking just the software and compile options you need and distributing them as pre-built binary packages.

Finally I took the time to set up a FreeBSD build machine running poudriere (in a virtual machine), and can now much more easily perform updates on the SolarNetwork infrastructure. There was just one major stumbling block along the way: I didn't know how to get poudriere to use the specific version of Clang I needed. There is plenty of information online about setting up poudriere, but I wasn't able to find anything about getting it to use a custom compiler toolchain. After some trial and error, here's how I finally ended up accomplishing it:

Create toolchain package repository

Poudriere works with FreeBSD jails to manage package repositories. Each package repository uses its own jail with its own configuration, such as which compiler options to use and which packages to compile. The first task is to create a package repository containing the toolchain packages needed, in my case provided by the devel/llvm39 port. The toolchain packages from this repository can then be installed into other poudriere build jails to serve as their compiler.
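
I won't cover the basic poudriere installation and configuration here, but a minimal sketch might look something like the following (the pool name, paths, and signing key location are assumptions based on the commands used further below):

# Install poudriere (and nginx, used later to serve the packages)
pkg install poudriere nginx

# Minimal /usr/local/etc/poudriere.conf settings (values assumed):
#   ZPOOL=zpoud
#   BASEFS=/usr/local/poudriere
#   FREEBSD_HOST=https://download.FreeBSD.org
#   DISTFILES_CACHE=/usr/ports/distfiles
#   PKG_REPO_SIGNING_KEY=/usr/local/etc/ssl/keys/poudriere.key

# Create the ports tree named HEAD used by the commands below
poudriere ports -c -p HEAD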

Once poudriere was installed and configured properly, the steps looked like this:

# Create jail
poudriere jail -c -j toolchain_103x64 -v 10.3-RELEASE
mkdir /usr/local/etc/poudriere.d/toolchain_103x64-options

# Create port list (for this jail, just the toolchain needed, devel/llvm39)
echo 'devel/llvm39' >/usr/local/etc/poudriere.d/toolchain-port-list

# Update to latest (run before each build)
poudriere jail -u -j toolchain_103x64
poudriere ports -u -p HEAD

# Configure options
poudriere options -j toolchain_103x64 -p HEAD \
    -f /usr/local/etc/poudriere.d/toolchain-port-list

# Build packages
poudriere bulk -j toolchain_103x64 -p HEAD \
    -f /usr/local/etc/poudriere.d/toolchain-port-list

After quite some time (llvm takes a terribly long time to compile!) the toolchain packages were built and I had nginx configured to serve them up via HTTP.
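
The nginx setup itself is simple; a sketch of the relevant server block might look like this (the server name is an assumption, with packages under poudriere's default data directory):

# /usr/local/etc/nginx/nginx.conf (fragment, assumed)
server {
    listen       80;
    server_name  poudriere;

    location /packages/ {
        alias     /usr/local/poudriere/data/packages/;
        autoindex on;
    }
}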

Create target system package repository

Now it was time to build the packages for a specific target system. In this case I am using the example of building a Postgres 9.6-based database server system, but the steps are the same for any system.

First, I created the system's poudriere jail:

# Create jail
poudriere jail -c -j postgres96_103x64 -v 10.3-RELEASE

# Create port list for packages needed
echo 'databases/postgresql96-server' \
    >/usr/local/etc/poudriere.d/postgres96-port-list

echo 'databases/postgresql96-contrib' \
    >>/usr/local/etc/poudriere.d/postgres96-port-list

echo 'databases/postgresql-plv8js' \
    >>/usr/local/etc/poudriere.d/postgres96-port-list

# Configure options
poudriere options -j postgres96_103x64 -p HEAD \
    -f /usr/local/etc/poudriere.d/postgres96-port-list

Second, I installed the llvm39 toolchain, using the custom toolchain repository:

# chroot into the build jail
chroot /usr/local/poudriere/jails/postgres96_103x64

# enable dns resolution for the build server (if DNS names to be used)
echo 'nameserver 192.168.1.1' > /etc/resolv.conf

# Copy /usr/local/etc/ssl/certs/poudriere.cert from HOST
# to /usr/local/etc/ssl/certs/poudriere.cert in JAIL
mkdir -p /usr/local/etc/ssl/certs
# manually copy poudriere.cert here
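
The jail's filesystem is visible from the host under /usr/local/poudriere/jails/postgres96_103x64, so the certificate copy can be done from a second shell on the host, something like:

# From the HOST (not inside the chroot)
cp /usr/local/etc/ssl/certs/poudriere.cert \
    /usr/local/poudriere/jails/postgres96_103x64/usr/local/etc/ssl/certs/poudriere.cert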

Then, still inside the chroot, I configured pkg to use the toolchain repository via a /usr/local/etc/pkg/repos/poudriere.conf file:

poudriere: {
    url: "http://poudriere/packages/toolchain_103x64-HEAD/",
    mirror_type: "http",
    signature_type: "pubkey",
    pubkey: "/usr/local/etc/ssl/certs/poudriere.cert",
    enabled: yes,
    priority: 100
}

The URL in this configuration resolves to the directory where poudriere built the packages, served up by nginx. Next I installed the toolchain, explicitly telling pkg to use this repository:

pkg update
pkg install -r poudriere llvm39

# clean up and exit the chroot
rm /etc/resolv.conf
exit

Now I can configure poudriere to use the toolchain by creating a /usr/local/etc/poudriere.d/postgres96_103x64-make.conf file with content like this:

# Use clang
CC=clang39
CXX=clang++39
CPP=clang-cpp39

DEFAULT_VERSIONS+=pgsql=9.6 ssl=openssl

The next step is what took me the longest to figure out, probably because I had not studied how poudriere works with ZFS very carefully. It turns out poudriere makes a ZFS snapshot of the jail dataset, named clean, and then clones that snapshot each time it performs a build. So all I needed to do was re-create that snapshot:
 
# Recreate snapshot for build
zfs destroy zpoud/poudriere/jails/postgres96_103x64@clean
zfs snapshot zpoud/poudriere/jails/postgres96_103x64@clean

Finally, the build can begin normally, and the custom toolchain will be used:

# Build packages
poudriere bulk -j postgres96_103x64 -p HEAD \
    -f /usr/local/etc/poudriere.d/postgres96-port-list

Update target system to use poudriere repository

Once the system's build is complete, it is possible to configure pkg on that system to use the new poudriere package repository via a /usr/local/etc/pkg/repos/poudriere.conf file:

poudriere: {
    url: "http://poudriere/packages/postgres96_103x64-HEAD/",
    mirror_type: "http",
    signature_type: "pubkey",
    pubkey: "/usr/local/etc/ssl/certs/poudriere.cert",
    enabled: yes,
    priority: 100
}
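
Copying the signing certificate over from the build host could be done with something like this (the poudriere hostname is an assumption, as above):

# Copy the signing certificate from the build host
mkdir -p /usr/local/etc/ssl/certs
scp poudriere:/usr/local/etc/ssl/certs/poudriere.cert \
    /usr/local/etc/ssl/certs/poudriere.cert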

Then I copied the certificate from the build host to the path configured above. I no longer want to use the default FreeBSD package repository on this system, so I created a /usr/local/etc/pkg/repos/freebsd.conf file to disable it, with the following content:

FreeBSD: {
    enabled: no
}

Done! Now, after running pkg update, all packages will install from the poudriere repository, and I no longer need to compile the software on the system itself.
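
For example, installing the freshly built packages on the target system might look like this (package names follow the ports listed earlier; the forced upgrade is only needed if packages were previously installed from the FreeBSD repository):

pkg update
pkg install postgresql96-server postgresql96-contrib

# reinstall any existing packages from the new repository
pkg upgrade -f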

Friday, May 26, 2017

VivaTech 2017

This year Greenstage will be at VivaTech in France from June 15th to 17th.  We will be sharing our distributed energy solutions with the world and showing off our latest and greatest R&D.

If you are planning on coming along, check us out in the VivaTech Vinci Energy Lab.

See you there!