
Q&A with Aker Carbon Capture: Embracing an Industrial DataOps philosophy



We talked to Simen Ringdahl, a data scientist at Aker Carbon Capture, about their experience instilling an Industrial DataOps philosophy into Aker Carbon Capture's modular carbon capture plant.

Q: What do you focus on at Aker Carbon Capture?

Aker Carbon Capture operates a mobile, modular carbon capture plant called the Mobile Test Unit, or the MTU for short. It’s been “on tour” since 2009, traveling worldwide to test our technology at facilities in Europe and North America.

We’ve built a portfolio of data, including pressure, temperatures, CO2 capture rate, and much more. My role is to make sense of the real-time data from the MTU and the historical data. Everything is ingested into Cognite Data Fusion, where we run most of our analyses.
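The CO2 capture rate mentioned above is typically derived from paired inlet and outlet measurements. As a rough illustration (the function, sensor values, and numbers here are invented for the example, not real MTU data), the standard definition looks like this:

```python
# Hypothetical sketch: deriving a CO2 capture rate from paired inlet/outlet
# readings, the kind of value a plant like the MTU might track.
# All numbers are illustrative, not real MTU data.

def capture_rate(co2_in: float, co2_out: float) -> float:
    """Fraction of inlet CO2 removed: (in - out) / in."""
    if co2_in <= 0:
        raise ValueError("inlet CO2 concentration must be positive")
    return (co2_in - co2_out) / co2_in

# Example: 12% CO2 in the flue gas entering, 1.5% leaving the absorber.
rate = capture_rate(12.0, 1.5)
print(f"capture rate: {rate:.1%}")  # capture rate: 87.5%
```

In practice a value like this would be computed continuously from the live sensor streams rather than from single readings.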

Q: How does Cognite Data Fusion help you in your day-to-day work?

To visualize the data and help my colleagues on-site working with the MTU, I built a dashboard in Grafana. Because our data in Cognite Data Fusion stretches back to 2009, you can easily click back to any time the MTU was running and collecting data.

The dashboard also features the latest entries in the event log, so you can see the latest changes, measurements, etc.

The dashboard took me about a month to build. I couldn’t have done it without Cognite Data Fusion. It would have been difficult to find as good a place to store the data and make it accessible.

We have a similar dashboard in OSIsoft PI, but a Grafana dashboard proved to be a better access point for data. Having the dashboard available on your smartphone also means people on site don’t need to go back and forth between the MTU and the office to check the sensor values.

I also built a simple visualization that combines an illustration of a carbon capture plant and live sensor values. We want to use it as a learning tool and display it on a screen in the office to give people who aren’t working on the technology side of the business an idea of what we’re talking about when we talk about temperatures, pH values, and so on.

Once the data is in Cognite Data Fusion, it’s easy to build applications on top of it—so easy that someone like me can create a foundation for others to build their own solutions. Now, my coworkers can create personalized dashboards themselves depending on how they prefer to have data visualized and how they use it.

That’s the DataOps philosophy: valuable data that is easily accessible by all.

Q: What does the future hold for your team?

Right now we have a single facility, but what happens in the future when we have 20? How will the data flow work? How should we organize information? How should we build things so that we can reuse the data models?

It’s a question of scalability. If you get the solution right the first time, it shouldn’t be much more work than copying and pasting the second time around. That’s something Cognite is helping us with.

In the future, we want to conduct fleet analyses and make our facilities learn from one another. We see a major potential in Cognite Data Fusion, because it’s so easy to explore the same kind of data originating from different facilities.

We also want to create a simulator—essentially a virtual carbon capture plant—that we can run at the same time as a real-life plant. That would give us a powerful tool for understanding how to achieve the optimal carbon capture rate. It would also enable us to test what would happen to the capture rate if, for example, you increased the temperature in the plant. Then we would be able to share those insights and recommendations with people on site and capture even more CO2.
This is something Cognite Data Fusion will help us do, because you can easily stream the data to a simulator—or just run the whole simulation in Cognite Data Fusion.
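The core of the virtual-plant idea can be sketched very simply: run a model in parallel with measured values and flag where prediction and reality diverge. The linear temperature model, function names, and thresholds below are all invented for illustration; a real simulator would be far more sophisticated.

```python
# Illustrative sketch of a "virtual plant" running alongside a real one.
# The toy model and all values are assumptions for the example.

def predicted_rate(temp_c: float) -> float:
    """Toy model: capture rate declines as absorber temperature rises."""
    return max(0.0, min(1.0, 0.95 - 0.005 * (temp_c - 40.0)))

def drift_alerts(samples, tolerance=0.05):
    """Compare measured capture rates against the model's prediction."""
    alerts = []
    for timestamp, temp_c, measured in samples:
        predicted = predicted_rate(temp_c)
        if abs(predicted - measured) > tolerance:
            alerts.append((timestamp, predicted, measured))
    return alerts

readings = [
    ("09:00", 40.0, 0.94),  # close to the model's prediction of 0.95
    ("10:00", 45.0, 0.80),  # well below the predicted 0.925 -> flagged
]
print(drift_alerts(readings))
```

A deviation flagged this way is exactly the kind of insight that could be shared with people on site, as described above.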

Aker Carbon Capture’s technology and the HSE-friendly amine blend we use to capture CO2 are built on years of systematic research. What if we could build a machine learning algorithm to automate that research? Instead of looking at one molecule at a time, what if we could look at millions?
