Building a system on Google Cloud Platform to support IoT services
Kyocera Communication Systems Corporation (hereinafter KCCS) provides its energy-focused IoT PaaS services via the IoT Cloud Platform, built on Google Cloud Platform (hereinafter GCP). We asked Tetsushi Otake (left), director of the Energy Cloud group in the Social Systems, Solutions and Service Business, and Hiroshi Tachiwaki (right), Development Division Chief for the same services, to give an overview of the IoT Cloud Platform and the key reasons behind their choice of GCP as its infrastructure base.
Tell us about the IoT Cloud Platform
The Social Systems, Solutions and Service Business to which we belong combines engineering expertise and ICT technology to provide energy management solutions and disaster prevention solutions that aid in the creation of vibrant regions and safe and secure towns.
The IoT Cloud Platform is a PaaS for IoT services. Targeting energy management as its first application, it receives the large volumes of data sent daily from various types of sensors and devices, such as storage batteries, solar panels and air conditioning units, and saves it in an integrated database. The collected data is then processed (incident detection, prediction and analysis, data visualisation, and so on) and retrieved by externally linked servers in whatever form is required.
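The article does not give implementation details, but the ingestion step described above can be sketched as normalising heterogeneous device payloads into one common row format ready for an integrated database. The device types, field names and schema here are purely illustrative assumptions, not KCCS's actual design.

```python
from datetime import datetime, timezone

# Hypothetical sketch of the ingestion step: map device-specific sensor
# payloads onto a single common row format for an integrated database.
# All field names below are illustrative assumptions.
def normalize_reading(device_type: str, payload: dict) -> dict:
    """Return one uniform row for the given device payload."""
    value_keys = {
        "storage_battery": "charge_pct",
        "solar_panel": "output_w",
        "air_conditioner": "room_temp_c",
    }
    key = value_keys[device_type]
    return {
        "device_type": device_type,
        "device_id": payload["id"],
        "metric": key,
        "value": float(payload[key]),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }

row = normalize_reading("solar_panel", {"id": "sp-001", "output_w": 812})
```

A uniform row format like this is what makes downstream processing (incident detection, analysis, visualisation) device-agnostic.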
For example, you can expect it to be used for things like feeding visualised room temperature data back to an air conditioning control system, or monitoring an independent electricity source that combines solar panels and storage batteries for disaster prevention purposes.
Why did you build IoT Cloud Platform on GCP?
We had thought about building the IoT Cloud Platform on the cloud and spent about a year testing it out.
Then, when we took part in the Cloud Ace GCP seminar and tried out GCP in the hands-on training session, we felt it was user-friendly and fit for our purposes.
With other companies' clouds, we simply ran our own infrastructure on their systems, so they didn't save us the time and effort of building that infrastructure. With GCP-provided services such as Google App Engine or BigQuery, by contrast, the infrastructure side is invisible. As application engineers, not having to think too much about infrastructure made GCP feel very user-friendly.
Also, when building the IoT Cloud Platform we recognised the need to automate in order to bring infrastructure operating costs down as close to zero as possible. Even at that time, the RESTful APIs were complete and, from an application engineer's perspective, very easy to use.
From a slightly longer-term perspective, we felt that Google's artificial intelligence and machine learning tools were quite a bit ahead of other companies'. We are seriously considering testing out Google's machine learning library, TensorFlow, as well as the recently enhanced Cloud Vision API, Natural Language API, etc., and putting them to use. On that point, we'll continue to gather data from various types of sensor devices using other companies' cloud services, but the reason for choosing GCP for our IoT Cloud Platform was that we felt Google offers better future prospects for exploiting that data.
Google Cloud Platform supports IoT analysis of the energy sector
Tell us about the building side of things
When we used GCP to build the IoT Cloud Platform, we realised that it is more of an application platform than an infrastructure platform. On a practical level, we use different GCP services for different things: Google Compute Engine (hereinafter GCE) for the core, function-by-function build, and BigQuery, on the application side, for data analysis.
We currently rate GCP highly on these points, but with technology advancing at breakneck speed, better services may well appear in the near future, so we can't rule out a return to on-premises systems depending on external factors. For that reason, we've put plans in place to switch quickly from GCP to another cloud if the need arises. In the meantime, BigQuery boasts unparalleled price-to-performance, and it has become an indispensable tool that we use right across the board.
Although we anticipate handling an increasing volume of data with the IoT Cloud Platform, data size is not actually a major concern. The system is composed so that load is not concentrated on any one server but divided among servers according to their function. You can picture each single instance as handling one process. We see this near-container style of usage as key, so we are thinking of switching to Docker in the future and also introducing Kubernetes, which Google has released as open source.
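The one-function-per-server division described above can be sketched as a simple dispatcher: each message type is routed to its own handler, which in a deployed system would run as a separate instance or container. The handler names and the incident threshold are made-up illustrations, not KCCS's actual logic.

```python
# Illustrative sketch of dividing load by function rather than
# concentrating it on one server: each function gets its own handler,
# standing in for a separate instance or container per function.
def detect_incident(reading):
    # Flag readings outside a plausible range (threshold is made up).
    return {"incident": reading["value"] > 100}

def visualise(reading):
    # Turn a reading into a point for a dashboard chart.
    return {"chart_point": (reading["device_id"], reading["value"])}

HANDLERS = {
    "incident_detection": detect_incident,
    "visualisation": visualise,
}

def dispatch(function_name, reading):
    """Route a reading to the server/process responsible for it."""
    return HANDLERS[function_name](reading)

result = dispatch("incident_detection", {"device_id": "ac-7", "value": 120})
```

Because each handler is independent, replacing one "process" with a Docker container behind the same routing key is a natural next step, which is how a move to Kubernetes typically proceeds.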
What were your impressions when you used GCP in practice?
In terms of functionality, it was good that we didn't have to worry about data sizing, and we were happy that setting up the regional network configuration was simple. It was also easy to place a database replica in the US, for example.
Also, the sample scripts available in Cloud Shell make it very easy to use and are really useful for automating infrastructure creation.
By the time we began using GCP we already had experience with other companies' clouds, so we had no trouble mapping familiar functions; on the contrary, we felt the UI was sophisticated. And although GCP has unique functions of its own that can take time to get used to, there was no particular need for development team members other than me to undergo training. If you understand the basic principles, you should be able to use it quite easily.
Another thing is the speed at which functions evolve. It's only been two or three months since we began using it in earnest, but various new functions have been added in that time. We enjoy anticipating which new functions will come out next, but on the other hand we've had to stop creating manuals with screen captures, as the screens change almost as soon as we make one. That said, since we decided to keep only draft versions covering scripts and APIs, our productivity has increased.
On the cost side, we feel that per-minute charging has raised the company's awareness of needless expenditure. We've become conscious of small things such as only launching production environment instances when a batch requires them, or shutting down development environment instances before going home.
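As a rough illustration of why per-minute billing rewards habits like shutting down development instances overnight, here is the back-of-the-envelope arithmetic. The hourly rate is a placeholder for the sake of the calculation, not an actual GCP price.

```python
# Back-of-the-envelope saving from running a dev instance only during
# working hours, which per-minute billing makes worthwhile.
# The rate below is an assumed placeholder, not a real GCP price.
RATE_CENTS_PER_HOUR = 10        # assumed: 10 cents per instance-hour

hours_always_on = 24 * 30       # instance left running all month (720 h)
hours_work_only = 10 * 22       # ~10 h/day across 22 working days (220 h)

cost_always_on = hours_always_on * RATE_CENTS_PER_HOUR
cost_work_only = hours_work_only * RATE_CENTS_PER_HOUR
monthly_saving = cost_always_on - cost_work_only  # cents saved per month
```

Under these assumptions the instance costs less than a third as much when stopped outside working hours; with coarser (e.g. hourly or daily) billing granularity, much of that saving would disappear.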
We are also pleased to receive advice on optimising instances so that we don't spend unnecessarily on over-specified machines.
What did you enjoy about using Cloud Ace services?
Actually, our biggest concern about using GCP was the payment method. Credit-card-only payment complicated our accounts department's procedures and was a stumbling block to adopting GCP, so Cloud Ace's Partner Billing Service was a real help. From now on, we plan to proactively expand the IoT Cloud Platform's functions, and because our service needs to launch quickly, we're also thinking about using Cloud Ace's systems development service.
What do you plan to develop next?
At the moment we’re developing the IoT Cloud Platform to focus its use on the energy sector, but in the future we’d like to turn it into a generic IoT platform that can be brought to bear on other fields.
Also, one major direction we’d like to go in is to adopt Google’s machine learning and artificial intelligence technology as part of our services. We can increase the IoT Cloud Platform’s added value and contribute to society by using machine learning and AI technology to improve the quality and worth of the information obtained from the large volume of data handled by the IoT Cloud Platform.
This is a translation of an article published by Cloud Ace, Inc.
Available online: http://www.cloud-ace.jp/case/detail17/