Making use of Octopus Power-ups with a Solis Hybrid inverter.

Octopus Energy recently announced they would be offering Power-ups: short periods of free electricity for customers in certain areas.

As one of the lucky people living in a Power-ups area, I wanted to quickly put together some basic notes on how to configure a Solis Hybrid Inverter to charge from the grid during these times. See https://octopus.energy/power-ups/ to find out more about the scheme.

Note: This is all just my personal thoughts. I’m not an expert and information may be inaccurate so listen to me at your own risk. If you are not comfortable changing these settings – don’t. I’m fairly sure you could break your install or worse by incorrectly changing settings within these menus.

While everything mentioned here can be managed directly on the inverter, the steps below are done using the remote management tool on Solis Cloud. If you have not already, you will need to manually request access to this at https://solis-service.solisinverters.com/en/support/solutions/articles/44002373796-inverter-remote-control-application

Once approved, you will likely need to log in and out again to see the new buttons.

Setting the Inverter to charge from grid during the Power-up.

To start, enter the management console. From the plant overview screen select your plant (there should normally just be the one).

In the left hand menu you should then see a tab labelled “Device”. Click this and you should see a list of inverters. Select the inverter serial number to view the inverter itself.

Then in the top right you should see the following buttons. Select “Inverter control”.

At this point you will need to re-enter your password and agree to the warning.

Now you should see a Control panel similar to the following.

To start, select Work Mode, then select Self-consumption from the options available.

You should then see a screen that looks like the below.

First, confirm self-use mode is on by clicking into Self-Use Mode Switch and checking the current value is “enable”.

Next enter Charge and Discharge.

Here I set charging current1 to 50A (the default), leaving discharge as 0 (as I don’t want to export). I also set the charging time to the period of the upcoming “Power up”; in my case 2pm till 4pm.

Unfortunately you can’t schedule this in advance so you’ll have to set this on the day, then turn it back off again afterwards or you risk it charging the battery on days when grid power is noticeably less free.

Now that is done, you can click in to Time of use switch and enable it. This pretty much just tells the inverter to listen to the charging/discharging times you selected previously. These are ignored once this is turned back off.

Finally, select Allow Grid Charging, and enable it.

And then it is just a case of waiting for the “Power up” to start.

Assuming all is well your battery should begin to charge at around 2.5kW. This will first use any solar generation you have (as this goes direct to the battery), and then tops up the rest from either spare power capacity in your home or the grid.
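As a back-of-the-envelope check, that ~2.5kW figure lines up with the 50A charging current set earlier, assuming the battery pack sits somewhere around 50V (a rough assumption on my part for a 48V nominal pack; real voltage varies with state of charge):

```python
# Rough estimate of battery charging power implied by the inverter settings.
# The ~50 V figure is an assumption - 48 V nominal packs typically sit a
# little higher, and the actual voltage moves with state of charge.
NOMINAL_BATTERY_VOLTAGE = 50.0  # volts (assumed)
CHARGING_CURRENT = 50.0         # amps (the "charging current1" setting)

charge_power_kw = NOMINAL_BATTERY_VOLTAGE * CHARGING_CURRENT / 1000
print(f"Approximate charge power: {charge_power_kw:.1f} kW")
```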

That said….

Please do remember to disable the “Time of use switch” and potentially “Allow grid charging” options after the power up is complete. If you forget you will end up paying for the grid import the next day when the defined charging slot comes around again.

A few more notes.

  • Everything described here was tested on my own solar “plant”: a 3.4kW Solis Hybrid Inverter with 2 PylonTech US3000C batteries.
  • My home setup had been in its default “self use” mode since its installation.

I did find two particularly interesting things that were not originally clear to me when setting this up.

  1. While in self use mode, even with allow grid charging on, your system will not charge from the grid unless it is during a charging period (configured in the charge and discharge settings).
  2. Better still, the naming of the charge from grid setting is slightly misleading. What it actually appears to mean is “Allow inverter to charge battery from an AC source”.

Why is this interesting?

Well, if you happen to have a second solar setup in your home (like I do from an older install), enabling this will actually allow your battery to charge using the spare AC capacity generated by that system, while not touching anything from the grid itself. I’m fairly sure this only works due to my hybrid inverter’s CT clamp being on the main feed to the house, allowing it to detect if there is spare capacity that’s currently being exported. I assume this is the standard approach, but best to confirm if you are unsure.

It’s worth noting that the same configuration options could also be used to set a non “Power up” based charging schedule (given that is what they are actually intended for), for example if you get cheap power overnight or similar. Given I don’t get that, the “Power up” was my first time touching any of these options. As such, I thought someone out there may find it useful to have this all written down, given the manual wasn’t quite as clear as I would have preferred.

That’s pretty much all there is to it. Thanks for reading, and let me know if this was helpful or if I missed anything that’d be good to know.

Larahook: Hooks for Laravel

In SaaS or similar applications, there often comes a point when you want to start adding custom or tweaked functionality for clients without the codebase itself diverging. One of the most common solutions to this is to use hooks.

Hooks effectively allow you to make certain parts of your application’s functionality open to modification by another part. My own usage of this, for instance, is allowing a single white label configuration file to adjust (in some cases quite deeply) how the larger application works. WordPress is probably the best known user of this approach, which allows its plugins and themes to easily interact with WordPress’s functionality without needing to make any changes to WordPress’s own code.
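For readers new to the pattern, here’s a minimal sketch in Python of what a hook system boils down to. This is illustrative only – larahook itself is PHP, and none of these names are its actual API:

```python
# Minimal sketch of the hook pattern: named extension points that plugin
# code can register listeners on, without touching the core code.
from collections import defaultdict

listeners = defaultdict(list)  # hook name -> list of (priority, callback)

def listen(hook, callback, priority=10):
    """Register a callback against a named hook."""
    listeners[hook].append((priority, callback))

def get(hook, value):
    """Run every listener on a hook in priority order, piping each
    listener's output into the next one."""
    for _, callback in sorted(listeners[hook], key=lambda pair: pair[0]):
        value = callback(value)
    return value

# A "plugin" tweaks the sidebar without the core code changing:
listen("template.sidebar", lambda html: html + "<p>Plugin widget</p>")
print(get("template.sidebar", "<h1>Sidebar</h1>"))
```

Because Python’s sort is stable, listeners registered at the same priority also run in registration order here – which happens to mirror the behaviour change described further down.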

To support this functionality in a Laravel application, for the last few years I have been using the wonderful esemve/Hook library by Bence Kádár. Unfortunately the library now appears to be mostly inactive – most critically lacking support for Laravel 8 (as well as PHP 8 itself in some areas).

As such, with a growing requirement for additional functionality and updates, I decided to take the plunge and create a new maintained fork of the library.

Github profile for the coinvestor/larahook library.

Although for simple use cases coinvestor/larahook can be used as a drop-in replacement for the original esemve/hook, I used the opportunity of a clean break to make some more involved changes to the library’s functionality.

The most important changes are listed below.

Laravel 8 and PHP 8 compatibility plus auto-discovery support.

The library has been updated to work with the latest version of Laravel, as well as to make use of the newer package auto-discovery features, meaning you will no longer need to update your app.php directly.

Retired the initial content parameter, and replaced it with $useCallbackAsFirstListener.

In the original version of the library, you needed to specify both a callback (to run when the hook was not being listened to) as well as, optionally, a default $output value passed as the fourth parameter to the hook’s get method. This led to some confusing code where a listener would need to invoke the original callback directly to get a default value (where $output was null), but if another listener had run previously, might instead receive data in $output that it needed to be aware of.

To simplify this I changed the fourth parameter on get to instead be a boolean called $useCallbackAsFirstListener. By setting this to true, the hook’s default callback will always run first and pass its return value in as the $output value for every subsequent listener, so listener logic can be simplified to always expect a populated $output. This option defaults to false: in the case where the default callback takes an action (say sending an email), always running it would not be desired, so for safety it must be specifically enabled.
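To sketch the difference in Python (illustrative pseudo-code of the behaviour described above, not larahook’s actual PHP signature):

```python
# Sketch of the $useCallbackAsFirstListener behaviour. Illustrative only.
def run_hook(default_callback, hook_listeners, use_callback_as_first_listener=False):
    if not hook_listeners:
        # No listeners registered: the default callback provides the result.
        return default_callback()
    # Seed $output for the first listener: either None (legacy behaviour -
    # listeners must handle the "no previous output" case themselves) or
    # the default callback's return value (the new opt-in behaviour).
    output = default_callback() if use_callback_as_first_listener else None
    for listener in hook_listeners:
        output = listener(output)
    return output

upper = lambda output: (output or "fallback").upper()
print(run_hook(lambda: "hello", [upper]))        # legacy: listener sees None
print(run_hook(lambda: "hello", [upper], True))  # new: listener sees "hello"
```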

Listeners on the same hook at the same priority will no longer overwrite each other.

Unlike the original version of the library, larahook will allow multiple listeners to be registered on the same hook at the same priority level. When this happens, the listeners are run in the order they were registered.

If you are making use of the original functionality where hooks at the same priority overwrite each other, code changes will be needed.

Support for falsey return values.

In our application there were a number of cases where we wanted a hook listener to return a falsey value back to an underlying function. This was previously not possible, as a listener returning false would trigger an abort – causing the hook to return the default value (rather than the falsey value itself).

This is no longer the case in larahook, meaning the Hook::stop() method will now have to be used directly where a hook does need to abort, as 0/false/null will simply be returned from the hook like any other value.
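A quick sketch of the change (again illustrative Python, not larahook’s API): aborting now requires an explicit sentinel, so falsey values flow through the chain untouched:

```python
# Sketch: falsey values pass through the listener chain like any other
# value; only an explicit sentinel aborts. Illustrative only.
STOP = object()  # stands in for larahook's Hook::stop()

def run_listeners(hook_listeners, output=None):
    for listener in hook_listeners:
        result = listener(output)
        if result is STOP:
            break          # explicit abort requested; keep prior output
        output = result    # False/0/None are perfectly valid outputs
    return output

print(run_listeners([lambda _: False]))  # False is returned, not discarded
```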

New Methods: getListeners, removeListener and removeListeners.

Listeners can now be unregistered, both individually as well as all listeners for a specific hook.

Additionally a full list of listeners on a hook can be returned using the getListeners method.

Tests and bugfixes.

In addition to the functionality changes, I also spent some time adding unit tests and basic CI for the library. As is always the case when adding tests, I managed to find and fix a number of minor bugs and edge cases across the library.

If you’d like to swap over to the new library, please have a look at our repo at https://github.com/CoInvestor/larahook/

GitHub Actions: Run CI when a specific reviewer or label is attached

Given the challenge I had in my own recent googling, I thought it would be worthwhile putting together this quick blog post to provide a simple, direct answer on how to configure a GitHub Action on a pull request so that CI tasks are only run when a certain reviewer has been added.

Quick background

  • We have a large application with a significant test suite.
  • We try to create our pull requests early in the development of a feature to aid visibility.
  • We need all pull requests to require a successful run of the test suite before they can be merged.
  • Finally: we didn’t want to keep running the test suite over and over unnecessarily (for example every time the branch is pushed to while being developed).

As such our preferred mechanism was for the test suite to only run when a specific reviewer is attached to the pull request. In our case the reviewer is our service account.

Solution

Let’s assume our existing GitHub Action looks something like the below. The action is set to run whenever a new PR is opened, a new reviewer is added, or new commits are pushed to the pull request itself.

name: ci

on:
  pull_request:
    types: [ opened, review_requested, synchronize ]

jobs:
  run_our_ci:
    runs-on: ubuntu-latest
    steps:
    - name: Run our CI
      run: |
        echo "Hello world!"

Currently this will be run every time one of those actions takes place, which isn’t something we want. The basic solution is to add a condition to the task, such as the below.

if: contains(github.event.pull_request.requested_reviewers.*.login, '<CI_SERVICE_ACCOUNT>')

The above will cause triggered actions to skip whenever the condition is not met. The condition in this case being that one of the attached reviewers has a login name matching <CI_SERVICE_ACCOUNT>.

The same can be achieved with a label instead, using something along the lines of:

if: contains(github.event.pull_request.labels.*.name, '<LABEL_NAME>')

With the above condition added to your task, your YAML should now resemble the below

on:
  pull_request:
    types: [ opened, review_requested, synchronize ]

jobs:
  run_our_ci:
    if: contains(github.event.pull_request.requested_reviewers.*.login, '<CI_SERVICE_ACCOUNT>')
    runs-on: ubuntu-latest
    steps:
    - name: Run our CI
      run: |
        echo "Hello world!"

If you now open a pull request with the above, you will note that each commit continues to get a “tick” next to it – but when you click in for detail, the task itself will show as skipped.

It is also worth being aware that making a base task skip will also cause any tasks that depend on it to skip as well (i.e. any referencing the task in a “needs:”). This is useful in that you don’t have to add the “if” to every task, but can also be annoying given that skipped tasks are considered by the GitHub protected branch feature to have run successfully, rather than not to have run at all.

This presented us with an issue: because the protection rules see “skip” as a “success” state, branch protection will now happily let you merge a branch without any CI being run on it – something we certainly didn’t want.

To work around this (although not ideal), and ensure the branch protections do enforce that the full test suite has run before allowing a merge, we can add an inverse version of the job such as the below:

no_ci_has_run:
    if: "!contains(github.event.pull_request.requested_reviewers.*.login, '<CI_SERVICE_ACCOUNT>')"
    runs-on: ubuntu-latest
    steps:
    - name: "No tests have run"
      run: |
        exit 1

This task will explicitly fail whenever the main CI steps have been skipped. By adding this as a “required” check in your protected branch settings, you can therefore ensure that people can only merge the branch when the full test suite has been run and passed on the given pull request.

This works due to the no_ci_has_run task “skipping” when the test suite is being run – which as mentioned above the branch protection feature sees as a success.

TLDR

The combined github actions YAML may look something like the below.

  • The real CI task will only run when <CI_SERVICE_ACCOUNT> is added as a reviewer.
  • The no_ci_has_run task runs whenever it is not, preventing the branch from being mergeable (if you use branch protection).
  • Tasks that depend on a skipped task (via “needs:”) will automatically skip as well.
name: ci

on:
  pull_request:
    types: [ opened, review_requested, synchronize ]

jobs:
  run_our_ci:
    if: contains(github.event.pull_request.requested_reviewers.*.login, '<CI_SERVICE_ACCOUNT>')
    runs-on: ubuntu-latest
    steps:
    - name: Run our CI
      run: |
        echo "Hello world!"
  no_ci_has_run:
    if: "!contains(github.event.pull_request.requested_reviewers.*.login, '<CI_SERVICE_ACCOUNT>')"
    runs-on: ubuntu-latest
    steps:
    - name: "No tests have run"
      run: |
        exit 1

Replacing the ethernet port on a Reolink CCTV Camera

A quick write-up on replacing the ethernet port on a “Reolink 5MP PoE” in case anyone else ends up needing to do this. My own issue stemmed from water getting inside the port and rusting through the wires in the original connector (my own fault due to how I’d set it up), meaning the device was unfortunately pretty much dead without more drastic action.

My original plan was to just snip the port off and wire it as a normal RJ45. This turned out to be a little less straightforward, as after cutting the end off I was greeted with 6 wires (including a white and a purple) rather than the 8 I was expecting to see.

After cutting apart the original port to see what was going on, I was able to determine pins 4/5 were wired to white and 7/8 were wired to purple, which after reviewing a super helpful Wikipedia article finally made sense (see “10/100 mode B, DC on spares”).

Deciding to chance it, I rewired the port as:

  • Pin 1: White/orange stripe
  • Pin 2: Orange solid
  • Pin 3: White/green stripe
  • Pin 4: White
  • Pin 5: Empty
  • Pin 6: Green
  • Pin 7: Purple
  • Pin 8: Empty

I’m happy to report the above resulted in my camera coming back to life (despite my questionable wiring). My guess would be that pins 5 and 8 wired with white and purple respectively would also work just as well, although I’ve not tested this myself.

As with any “DIY” on electronics, make sure you know what you’re doing and are aware of the risks before attempting anything. There’s always a solid risk you’ll just end up bricking your device, shorting your PoE switch or worse.

Hope anyone in the same predicament finds this handy – or at least doesn’t need to do quite as much head scratching before getting it all working again as I did.

By Carl on July 12th, 2021 in General

How to get your old analogue AHD / TVI / BNC CCTV cameras working with Blue Iris 5.

The aim of this blog post is to run through an easy and cost effective way to get any existing BNC / Analogue CCTV cameras you may have hooked up and working again with Blue Iris.

I put this blog post together primarily because I was unable to find a similar one that already existed. Although you could figure this all out on your own with a few hours of forum/blog trawling and a fair bit of tinkering – I suspect most people would rather save themselves an evening & avoid some of the guess work in terms of what equipment to order.

To be clear – if you’re setting up a new CCTV install, I’d strongly recommend just buying some IP cameras instead, as the resolution and clarity will be much higher. The Hook Up has great reviews of some decent PoE IP cameras: https://www.youtube.com/watch?v=xg3krwlX4jk

On the other hand, if you have a load of already wired-up and connected old school CCTV cameras that you’d like to bring back to life – either for convenience, or just because – read on.

What I’m working with

  • I have a Blue Iris 5 setup on a Windows PC, already connected up with a few IP cameras.
  • I have 4 “swpro-735cam” cameras mounted around the outside of the house, relics from an old installation before I moved here.
  • The BNC/power cables for the cameras are all sat in my hall and easily accessible.

My Goal.

Get the existing 4 Swann cameras working again and have them visible in Blue Iris without spending too much money.

My plan of attack is to use a cheap DVR and get that to stream the footage straight up to Blue Iris. Although dedicated RTSP/BNC video encoders do exist, all the ones I came across were massively more expensive than a cheap DVR.

The DVR.

In my case I bought an “ANNKE CCTV DVR” (this one to be exact). So far as I can tell most other models appear to be almost identical – at least on the hardware side – so my choice of Annke was mostly just because I recognised the brand name.

The exact model I ended up with was a DN41R, which set me back around £40.

The setup was fairly straightforward. I attached a temporary VGA monitor and the mouse included in the box, then powered it up and connected the 4 BNC cables to the back (ensuring the cameras were all powered). You will have to run through a quick setup wizard, involving setting up some passwords and secret questions for the DVR. After that you should hopefully see your cameras on the screen, assuming they all work.

If it asks, it’s also worth telling it you don’t want to run the wizard on startup, as the camera streams appear to hang while it is open and I don’t plan on leaving a screen attached.

It is also worth pointing out that you don’t need a hard drive in the DVR for any of this to work, so mine remains empty.

Getting the DVR streams working.

The next step is to get the DVR outputting the RTSP streams that we want to feed in to Blue Iris.

  1. First off, I connected the Annke DVR to my network with an ethernet cable.
  2. Step 2 was to open up my router control panel and grab the DVR’s IP (this was pretty painless as it supports DHCP).
  3. Browsing to the IP, you can then log in to the web control panel – which is surprisingly nice looking, despite the camera feeds not working in it (I wasn’t willing to install the plugin, but you don’t need it to set this up).

As an optional extra, I also added the DVR to my firewall list along with my other IP cameras to block it accessing the internet directly.

With luck, you should now see something along the lines of the above. If you want a quick test of the streams, you can actually access them straight away via RTSP using your admin credentials – although I’d recommend setting up a “Media user” and connecting with that instead for the proper setup.

For now you can poke the below in to VLC (via Open network stream) and check it works.

rtsp://username:password@{dvr-ip-address}/h264/ch1/main/av_stream

To add the Media User, head in to the Configuration tab, open Network, then Advanced settings, and select the final tab on the right, “Integration protocols”.

On this page you can create your media user (with access level Media User), then tick the enable ONVIF button. After saving this the DVR will want a reboot. I’m not 100% sure what enabling ONVIF actually does apart from opening up another RTSP stream at:

rtsp://username:password@{dvr-ip-address}/Streaming/Channels/101

This is the stream I’ve chosen to use in my Blue Iris setup. While you are in here it may also be worth turning off platform access and HTTPS if you don’t intend to use them.

Before we start adding the streams to Blue Iris, it is also probably worth tweaking the configuration of the cameras themselves a little.

On mine (as seen above) I enabled WD1 (which is the 960H resolution) and set it to stream only video rather than audio as well. I also lowered my frame rate to 15 (optional) and turned h.264+ on.

At this point I’d suggest using the “Copy to” button to set all the cameras up the same way, then hit save. It may want to reboot again after this.

Available Streams.

The DVR appears to support two different RTSP stream URLs, although the feeds coming from both appear to be much the same.

rtsp://username:password@{dvripaddress}:554/h264/ch1/main/av_stream

This breaks down into /{codec}/ch{camera-id}/{stream}/av_stream:

  • Codec (so far as I can tell this is ignored)
  • Channel (ch1 – ch4, or more if you bought a DVR that supports more cameras)
  • Stream (main, sub)

See the official support article for more info https://help.annke.com/hc/en-us/articles/360000252622-How-to-view-the-camera-on-VLC-player-by-RTSP- 

rtsp://username:password@{dvripaddress}:554/Streaming/Channels/101

Which breaks down into /Streaming/Channels/{camera-id}0{stream-id}

I.e. the sub stream for camera 4 would be /Streaming/Channels/403
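Since you end up typing these out for every camera, here’s a small sketch that builds both URL styles from a channel and stream id. The function names and placeholder credentials are my own; the path patterns are the ones documented above:

```python
# Build the two RTSP URL styles the DVR exposes for a given camera.
# Host, user and password are placeholders - substitute your media
# user's details and your DVR's IP.

def legacy_stream_url(host, user, password, channel, stream="main"):
    # Pattern: /{codec}/ch{camera-id}/{stream}/av_stream
    return f"rtsp://{user}:{password}@{host}:554/h264/ch{channel}/{stream}/av_stream"

def onvif_stream_url(host, user, password, channel, stream_id=1):
    # Pattern: /Streaming/Channels/{camera-id}0{stream-id}
    return f"rtsp://{user}:{password}@{host}:554/Streaming/Channels/{channel}0{stream_id}"

print(legacy_stream_url("192.168.1.50", "media", "secret", 1))
print(onvif_stream_url("192.168.1.50", "media", "secret", 1))
```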

Blue IRIS.

After all that, we can finally start adding the cameras to Blue IRIS itself, which is fortunately really easy.

  • Open the new camera dialog.
  • Add the IP and the media user’s username and password in the boxes, then hit “Find/inspect…” to check it’s happy. It should select the Generic/ONVIF setup for you.
  • In main stream, add the RTSP path to the camera you want to add, i.e. “/Streaming/Channels/101”.
  • Untick “Send RTSP keep-alives” (see troubleshooting).
  • Then hit OK and you should have your camera working!

While you’re here, it’s also likely worth setting the Direct to disk and hardware encoder options if you have these available.

You should now just run through the above steps for each camera you want to add (incrementing the camera-id number as you go).

Success

With a bit of luck, you should now be done and happily viewing your old BNC-connected cameras in Blue Iris (even if the quality is sadly pretty potato compared with newer IP cams).

If you are still having issues with the camera feeds, I’ve documented a few of the key problems I encountered getting everything running smoothly. With luck the above instructions will have prevented you hitting any of them, but on the off chance they haven’t, I’ll detail them below, as well as what ended up solving them for me. I suspect 90% of the time spent setting this all up went into debugging the below “fun”.

The Cameras keep dropping out & resetting?

Initially I thought this was due to the DVR being under-powered, and wasted a ton of time trying different media encodings, frame rates and settings.

As it turns out for some reason the DVR really does not like RTSP keep-alives. Having this enabled resulted in the cameras stopping after 5 or so seconds, then getting rebooted by watchdog. Since I turned this off they have been running fine.

Error: 80002745 (Socket error: 10053) 0

This is a weird one and only seemed to pop up when I was using the non-H264+ encoding. I originally thought it was a Blue Iris issue with the stream (as VLC worked fine on my desktop). Turns out some odd authentication hijinks were at play – as I eventually noticed VLC also didn’t work if I attempted to open the stream on the PC running Blue Iris.

Oddly it started working again as soon as I logged in to the web admin panel for the DVR from that PC – although often dropped out again not long after.

After disabling the keep-alives and switching to h264+ I’ve not seen this happen. While attempting to work this out I did end up making a number of other changes; I don’t think any of these were what solved the issue, but on the off chance one of them was, the other things I attempted were:

  • Disabling timeout on login (done from within the DVR’s local UI)
  • Upgrading the DVR firmware.

If you need to upgrade your firmware, you can grab it from here: https://help.annke.com/hc/en-us/articles/900000011006-Firmware-upgrade (just load it on to a USB stick, then use the built-in upgrade function under Maintenance on the local UI – the web version appeared broken for me).

If your DVR is the same as mine you will need to upgrade to the 20190401 version, rather than the latest 20190505 – as the latter is apparently too large for the DVR and thus won’t allow itself to be installed.

On the DVR camera page, 960×576 isn’t an option.

It’s possible that your camera doesn’t support it, but in a lot of cases it may just be that the UI hasn’t correctly refreshed the options in the drop-down. I found that by swapping to “camera 2” in the drop-down at the top, then back to camera 1, the correct values got populated.

Also ensure that WD1 is enabled and has been saved, and that you have refreshed the page after doing so.