Tuesday, September 3, 2024

What is Voluntary Carbon Market?

Voluntary carbon markets are trading platforms where companies and individuals can buy and sell carbon emission reductions, also known as carbon units or carbon credits, to offset their emissions voluntarily. Carbon credits represent measurable and verifiable reductions in greenhouse gas emissions, with one credit typically equating to one metric ton of carbon dioxide equivalent. These markets operate independently of government regulations, allowing participants to set prices and choose which projects to support.

As businesses and individuals become more conscious of the need to reduce emissions, voluntary carbon markets are expanding. The market has recently been valued at about $2 billion and, by some estimates, is expected to grow to $50 billion by 2030.

Voluntary carbon markets trade various types of carbon credits, including:

  • Removal credits are generated by projects that remove carbon dioxide from the atmosphere, such as reforestation or direct carbon capture and storage. 
  • Reduction credits are generated by projects that prevent greenhouse gas emissions, such as energy-efficient building construction, reducing emissions from deforestation and degradation (REDD), cutting methane emissions from agriculture, or investing in renewable energy. Sometimes, avoided emissions credits are referred to as a separate type.

To participate in the voluntary carbon market, companies and individuals purchase carbon credits from project developers responsible for generating high-quality credits. Once purchased, these credits can be used to offset emissions by submitting them to a carbon credit registry or donating them to a carbon offset project.

Carbon credits in voluntary carbon markets offer several benefits, including:

  • Allowing companies and individuals to offset their emissions and reduce their environmental impact.
  • Supporting the development of low-carbon technologies and projects.
  • Creating jobs and boosting economic growth in developing countries.
  • Raising awareness of climate change and encouraging emissions reduction efforts.

However, there are challenges in voluntary carbon markets, such as:

  • Market complexity and opacity can make it difficult for buyers to assess credit quality.
  • There is a risk of double counting, where identical emissions reductions are claimed more than once.
  • Potential adverse social or environmental impacts of some carbon offset projects.

Carbon credits can be purchased in various ways: directly from project developers, through carbon offset providers, or via carbon markets. When selecting a carbon offset provider, it is crucial to ensure they are reputable and that their credits are high quality, often verified by third-party organizations like the Verified Carbon Standard or the Gold Standard.

Once purchased, carbon credits can be used to offset emissions in several ways:

  • Submitting them to a carbon credit registry retires the credits and removes them from the market.
  • Donating them to a carbon offset project supports the project and reduces emissions.
  • Selling them to another company or individual is viable if you have excess credits.

Voluntary carbon markets provide a mechanism for offsetting emissions and reducing environmental impact while supporting low-carbon technologies and economic growth. However, buyers must navigate challenges such as double counting and potential negative impacts by choosing reputable providers and high-quality credits.

Related Articles

Are Voluntary Carbon Credits Assets or Commodities?

Monday, May 20, 2024

Are Voluntary Carbon Credits Assets or Commodities?

In voluntary carbon markets, carbon credits are treated as both commodities and assets. These markets enable companies and individuals to buy and sell carbon credits to offset their emissions. Unlike regulated markets, voluntary carbon markets lack government oversight, allowing participants to set their own prices and choose which projects to support. Viewing voluntary carbon credits as either commodities or assets can lead to different perceived risks and opportunities in trading these credits.

These markets trade in various types of carbon credits, including removal credits and avoidance credits. Removal credits are generated by projects that remove carbon dioxide from the atmosphere, such as tree planting and direct carbon capture and storage. Avoidance credits come from projects that prevent greenhouse gas emissions, such as building energy-efficient structures, reducing deforestation and degradation (REDD), and minimizing methane emissions from agriculture.

Project developers create these credits and ensure their quality, with standards organizations certifying the credits and registries recording them. Depending on the context, carbon credits can be seen as both assets and commodities.

As financial assets, voluntary carbon credits have monetary value and can be traded or held as investments. Market participants can buy, hold, and later sell these credits, often representing them as intangible assets on balance sheets. These credits are typically issued as digital certificates, making them digital assets as well.

As commodities, voluntary carbon credits represent quantifiable units (usually one metric ton of CO2 equivalent) that are standardized and tradable. Their price is influenced by supply and demand dynamics, like traditional commodity markets. Entities that generate carbon credits through reforestation, renewable energy projects, energy efficiency improvements, or carbon capture and storage can sell these credits to buyers seeking to offset their emissions.

The dual nature of voluntary carbon credits—as both investment assets and tradable commodities—demonstrates their versatility in financial and environmental markets. They present various opportunities and risks, shaped by different regulatory frameworks and market dynamics.

Your feedback on viewing voluntary carbon credits as either assets or commodities is welcome.


Related Reads

Artificial Intelligence Potential in the Voluntary Carbon Market

What is Voluntary Carbon Market?

Friday, October 20, 2023

Artificial Intelligence Potential in the Voluntary Carbon Market

Artificial Intelligence (AI) has the potential to revolutionize the Voluntary Carbon Market (VCM) by improving transparency and efficiency, supporting investment and financial innovation, and identifying potential projects. I provide concise, high-level AI use cases in the VCM in this article.  

Improving transparency and efficiency 

  • Data-driven decision-making: AI can aggregate and analyze massive environmental data to enable well-informed decisions about reducing carbon emissions. This data can be used to estimate emissions and develop solutions for offsetting or reducing emissions, particularly in Scope 3 accounting. 
  • Accurate pricing: AI can be used to develop more accurate and timely pricing for carbon credits and provide more transparency in the market. This helps to reduce fraud and ensures fair and transparent transactions. 
  • Enhanced customer support: AI can enhance customer support by employing intelligent agents to address common queries, reducing the need for numerous emails or phone calls. 

Developing carbon projects 

  • Efficient MRV: AI can be used to analyze remote sensing images on a large scale, assisting in measurement, reporting, and verification (MRV) processes, leading to the production of high-quality carbon credits at a reduced cost. Additionally, AI can analyze metered data from solar and renewable sources to enhance MRV for cleaner energy initiatives. 
  • Risk assessment: AI can be used to detect anomalies in data for a carbon project. AI can assist in implementing a risk-based project review process, differentiating between projects with varying levels of risk. These risk scores can streamline the review process, resulting in cost and time efficiency. This automation can scale the due diligence process for carbon offset projects, aiding standards, buyers, and sellers in evaluating project quality and impact. 


Supporting investment and financial innovation 

  • Demand forecasting: AI can forecast the demand for carbon credits and the availability of credits for offsetting, assisting organizations in planning and decision-making to support NetZero objectives. 
  • Efficient trading: AI can streamline carbon credit trading in private and public markets by efficiently matching buyers and sellers, reducing transaction time and costs. 
  • Risk assessment: AI can assess the financial and environmental risks associated with carbon offset investments, enabling more informed investment decisions. 
  • Product development: AI can be used to develop novel financial products, including carbon-linked securities and carbon futures, which channel investments into the VCM. 


Saturday, March 27, 2021

Storing application logs into Postgresql database using Log4j2

Within an application, logging is commonly used to provide information about ongoing processes, issues, and state, helping us understand what is happening behind the scenes. These logs provide insight into application state and help troubleshoot issues when the application is not running as expected.

Logging to a text file or the console is common. But as an application grows and its logs need further analysis, it is more efficient to store them in a database, where information can be searched with query languages and further insights can be generated using the power of database technologies. The first step is to write the application's logs to the database.

In this post, I will go over how to store application logs in a database. The post assumes that a Java application is already using Log4j2 for logging, for example to the console or to a text file. The code is a Maven project created in IntelliJ IDEA, with a PostgreSQL database server running on a Windows machine. I will be using Java Database Connectivity (JDBC) to connect to the database.

I will create a database named 'demo' and a table named 'log' using the query tool in pgAdmin:

--Create a database
CREATE DATABASE demo;

--Create a table
CREATE TABLE log(
    eventdate timestamp DEFAULT NULL,
    logger varchar(100),
    level varchar(100),
    message varchar(100),
    exception varchar(100)
);

The log table has columns for eventdate, logger, level, message, and exception.

The database 'demo' can be connected to with a username and password. So I set up a user 'demouser' with the password 'demopassword' and grant all privileges on the log table to this user.

--Create a user
CREATE USER demouser WITH ENCRYPTED PASSWORD 'demopassword';

--Grant privilege to user on log table
GRANT ALL PRIVILEGES ON TABLE log TO demouser;

In the Log4j2 configuration, add a JDBC appender named 'databaseAppender'.

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>

        <Jdbc name="databaseAppender" tableName="log">
            <DriverManager connectionString="jdbc:postgresql://localhost:5432/demo" driverClassName="org.postgresql.Driver" username="demouser" password="demopassword" />
            <Column name="eventdate" isEventTimestamp="true" />
            <Column name="level" pattern="%level" isUnicode="false" />
            <Column name="logger" pattern="%logger{36}" isUnicode="false" />
            <Column name="message" pattern="%message" isUnicode="false" />
            <Column name="exception" pattern="%exception" isUnicode="false" />
        </Jdbc>
    </Appenders>
    <Loggers>
        <Logger name="Main.Bootstrap" level="trace" additivity="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="databaseAppender"/>
        </Logger>
        <Root level="trace">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="databaseAppender"/>
        </Root>
    </Loggers>
</Configuration>

In the configuration, I specify the name of the table where the logs will be inserted; in this case, that table is 'log'. The connectionString attribute includes the location of the database server and the name of the database, and driverClassName names the JDBC driver specific to PostgreSQL. The Column elements map parts of the log event to the table columns.

In the Loggers section, I use 'AppenderRef' to also send the log information to the database (i.e., to databaseAppender).

Since the connection requires a PostgreSQL driver, I add the dependency below, as in any Maven project:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.10</version>
</dependency>

Now that logging is set up, I will focus on the application that uses the logger. The application code is simple: I have a main method that logs some information, a warning, and an exception to demonstrate how different logs are inserted into the database table.

package Main;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

//Class placed in package Main and named Bootstrap so that its logger matches the "Main.Bootstrap" logger configured above
public class Bootstrap {
    private static final Logger logger = LogManager.getLogger(Bootstrap.class);

    public static void main(String[] args) throws Exception {
        logger.info("Starting the demo.");

        //Let's throw an exception, but warn first.
        logger.warn("Exception must occur now.");
        try {
            throw new Exception("Throwing exception for demonstration");
        } catch (Exception e) {
            logger.error("Exception occurred.", e);
        }

        logger.info("Ending the demo.");
    }
}

When I run the application, it logs the information and the exception to the console. Since Log4j2 is also configured to append to the database, the logs are visible in the database table as well.




The database table shows the log level, the class that created the log, the message, and the actual exception, when applicable.
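
A quick way to confirm that rows are being written is to query the log table from pgAdmin or psql; a minimal check against the table created above:

--Show the most recent log entries written by the JDBC appender
SELECT eventdate, level, logger, message, exception
FROM log
ORDER BY eventdate DESC
LIMIT 10;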

Sunday, January 10, 2021

Creating, Modifying and Updating Docker Image from Container

Background

Containers enable running multiple application instances on a single host machine. As part of the software development process, we sometimes need to configure a containerized solution and make iterative improvements. In this process, we start with a base docker image and improve it in phases, adding the needed components before finalizing the package. We start with a bare image, continuously add layers, and pass the intermediate product along as different versions. To do this, we must create an image from a container after modifying it and then pass that image along for further modification. In this article, we look at the process of creating an image, running it in a container, modifying it, and finally distributing the image using a public repository as well as a file-based approach.

In addition to a willingness to work through the technical details, the following is needed for hands-on practice:

  • A Docker Hub account – A free registry account can be created at  https://hub.docker.com/
  • Docker – Docker Desktop available at https://www.docker.com/products/docker-desktop can be installed. 
  • Terminal (e.g. PowerShell) - Any terminal can be used to execute docker commands. In the examples, we will be using PowerShell, where $ is used to define a variable and # starts a comment.

The commands and examples in this article were run on a Windows 10 machine with Windows PowerShell and Docker version 20.10.2.

Login to Docker Hub

Once Docker starts, we can log in using a Docker ID. The Docker ID is used as the username when authenticating to a Docker registry, and the Docker client connects to the registry to download or upload images. Docker Hub is one such registry; an account can be created at https://hub.docker.com/.

Let’s start a PowerShell or Terminal window and log on using:

docker login --username benktesh  #password will be entered in the prompt

When the above command is run and the user is authenticated, a successful login message is shown.


Create Docker Image

A Docker image can be created from a publicly available image. Here we are going to get an Ubuntu 18.04 image from Docker Hub with a pull command. Note that images can also be created from scratch; see Creating Docker Image for an illustration of building an image.

docker pull ubuntu:18.04


After the image has been pulled, we can verify that it exists by executing the image ls command (note that 'ls' is the lowercase letters 'l' and 's').

docker image ls


If Docker Desktop is installed, its image list also shows the newly pulled image.

Run Image as Container

So far we have downloaded an Ubuntu 18.04 image locally. This is similar to having a virtual machine image that is not running. To use this image, we need to run it inside a container. The image can be run as a container with the following command, specifying a container name such as 'ubuntu_container' based on the image 'ubuntu' with tag '18.04':

docker run -d --name ubuntu_container -i -t ubuntu:18.04

The --name argument defines the name of the container, and ubuntu:18.04 represents the repository:tag of the image. The -d argument runs the container in detached mode, and -i -t keep an interactive pseudo-terminal allocated. The command can be made reusable with variables. For example, in PowerShell,

we can define variables for the container name and image and use them.

$container = "ubuntu_container" #defines a variable for the container name

$image = "ubuntu:18.04" #defines a variable for the image label

After such variables are defined, the command can use the variables as below: 

docker run -d --name $container -i -t $image


The above command runs a container and returns the container id. We can check what is inside the container by opening a bash shell in it:

docker exec -it $container bash

which opens a bash prompt where we can execute the cat /etc/os-release command to find the release information of the Ubuntu image running in the container. The result shows that we are running Ubuntu version 18.04.5 (Bionic Beaver).

We have verified that Ubuntu 18.04 is running locally as a container. Issuing the exit command gets us out of the bash shell.
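
Put together, the quick check looks like this; the docker exec command is run from PowerShell, while cat and exit run inside the container's bash shell:

docker exec -it $container bash   #open a bash shell inside the running container
cat /etc/os-release               #inside the container: print the Ubuntu release details
exit                              #leave the container shell and return to PowerShell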

Modify Docker Image

Once the container is running, we can work on it: add more layers and applications to make it more useful. For example, we could add an application and distribute the resulting image to customers, or provide it as a value-added image for later use. To mimic this, we are going to modify the container, create a new image from it, and save the image to a Docker repository for public distribution.

For illustration, when we execute lsb_release -a in the current container, the response is 'command not found', which means the container does not provide lsb_release. Next, we will update the container by adding lsb_release so the image can be reused.


As part of the modification, we will update the base packages, install the lsb-core package, and mark the modification complete. We first update Ubuntu with apt update, then install lsb-core with apt install lsb-core. After the install completes, we can execute lsb_release -a to see the release information, as shown below.
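
Inside the container's bash shell, the sequence of steps described above looks like this (apt may ask for confirmation during the install):

apt update              #refresh the package lists inside the container
apt install lsb-core    #install the package that provides lsb_release
lsb_release -a          #now prints the Ubuntu distribution details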


We can now see that lsb_release -a is available. Essentially, we have made a small update to the base Ubuntu image.

Just as we installed lsb-core, other applications can be installed in this image as part of a containerized solution. For this article, we consider the image updated, reusable by others, and ready for distribution.

Create a new image from modified container for distribution

So far we have one image, Ubuntu 18.04, pulled from the Docker repository. We created a container from it and installed lsb-core into that container. We can inspect the images by executing docker images, and the containers by executing docker ps -a (ps stands for process status), as shown below.
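
For reference, the two inspection commands run from PowerShell:

docker images    #list the local images
docker ps -a     #list all containers, including stopped ones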

We can create a new image from the currently running container, including the updates we have made, by executing docker commit $container, which returns the sha256 string of the created image id. Executing docker images then shows the newly created image with that id. At this point, we have created a new image based on the updated container.

We can tag the newly created image with an appropriate name. We will create a variable to store the new name 'ubuntu_modified:18.04':

$newImage = "ubuntu_modified:18.04"

We will now commit the container to create a new image named 'ubuntu_modified:18.04'.

docker commit $container $newImage

 

The command returns a sha256 hash indicating the id of the newly created image. The image can be seen with the docker images command.


In the list we can see the newly created image named 'ubuntu_modified', with an image id matching the returned sha256 and its creation time. Note that its size is larger than the original image because we installed updates into it.

Now that we have created a fresh image from the modified container, the old container can be removed. First we stop the container and then remove it.

docker stop $container

docker rm $container

We can verify that the container is indeed removed by running the docker ps -a command.


Now that we have deleted the old container, we will create a new container named "modified_container" from the modified image. Let us create a new variable for the container name:

$newContainer = "modified_container"

Start a new container with

docker run -d --name $newContainer -i -t $newImage

Open a bash shell on the newly created container.

docker exec -it $newContainer bash

When we execute the lsb_release -a command, note that it returns the result without needing another update. Executing exit gets us out of the bash shell.


As before, let's stop and remove the newly created container since we no longer need it.

docker stop $newContainer

docker rm $newContainer

Distributing Docker Image 

Now that the image is created, we are ready to distribute it. There are two ways to do so. The first is to push the image to a public or private repository. For illustration, we are going to use the Docker Hub repository (https://hub.docker.com/repositories) to host the image for distribution.

Distribute using repository

First, we tag the image to add the repository information (docker.io is the default registry). We create a variable $repositoryTag with the value benktesh/ubuntu:18.04, tag the new image, and execute the docker push command:

$repositoryTag = "benktesh/ubuntu:18.04"

docker tag $newImage $repositoryTag

docker push $repositoryTag


A simple docker push makes the content available to the outside world, since it is publicly accessible in the repository. We can verify this by navigating to benktesh/ubuntu on Docker Hub (https://hub.docker.com/repository/docker/benktesh/ubuntu).


Now that the image is in the repository, the docker pull command can be used to get it for further use.
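
For example, anyone can now retrieve the published image with a pull of the same repository tag used above:

docker pull benktesh/ubuntu:18.04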

Distribute by creating tar file 

Another way is to create a tar file from either an image or a container. Here we look at saving a docker image as a tar file.

docker save -o ubuntu.tar $repositoryTag 

The command above creates a file, ubuntu.tar, from the image tagged with the value of $repositoryTag, and this file can be distributed. The tar file can then be loaded into Docker to recreate the image with a simple command:

docker load -i ubuntu.tar

Once the file is loaded, the image can be used.

Conclusion

In this post, we illustrated how to create a docker image from a base image in a public repository, run that image in a container, update the container with the necessary installs and upgrades, and then create an updated image from the modified container. We also showed how to push the image to a registry for distribution, and looked at a file-based approach to save and load images as tar files.

Related Articles

Set up development environment for Azure Kubernetes Service (AKS) deployment

Create a Docker image of an application

Deploy an application image to Kubernetes

Store image to the Azure Container Registry (ACR)

Deploy application image to Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) Deployment Recipes

Saturday, January 2, 2021

Passing a function as parameter in Java

Passing a function as a parameter allows reusing the code that receives it. In addition, this technique is useful for implementing different logic, especially when behavior parameterization is needed. This approach is also referred to as passing a function or function pointer as an argument in some other programming languages. In this post, I illustrate three ways to achieve this in Java with a simple example.

Conventional Method

First, I will use the conventional approach of creating an interface, implementing it, and using the implementation in a method. In the second approach, I will use an anonymous class that implements the interface. In the third approach, I will use a lambda that implements the needed function. These approaches go from more verbose to less verbose.

To get started, I create an interface with one method, 'doJob'. The method takes a JsonObject as a parameter, does some job, and returns a string. The parameter and the return value could be null; they are used here only for illustration.


package Main;

import io.vertx.core.json.JsonObject;

@FunctionalInterface
public interface IFunctionPointer {
    String doJob(JsonObject data);
}

The interface is annotated with '@FunctionalInterface' to ensure it has exactly one abstract method, which guarantees that a lambda can be used with this interface.

The "FunctionPointer" class implements the IFunctionPointer interface which simply adds 'doJob': true to data and returns the string representation of the input JsonObject.
  
package Main;

import io.vertx.core.json.JsonObject;

public class FunctionPointer implements IFunctionPointer {
    @Override
    public String doJob(JsonObject data) {
        if (data == null) {
            data = new JsonObject();
        }
        data.put("Title", "Using function");
        data.put("doJob", true);
        return data.encodePrettily();
    }
}


The doJob method is called by a consumer class. For example, the class 'UseFunctionPointer' has a method 'useFunctionPointer' that invokes the interface's abstract method on whatever implementation it is given.


package Main;

import io.vertx.core.json.JsonObject;

public class UseFunctionPointer {

    public static void main(String[] args) {
        //Method 1: Use conventional implementation
        IFunctionPointer functionPointer = new FunctionPointer();
        useFunctionPointer(new JsonObject(), functionPointer);
    }

    public static void useFunctionPointer(JsonObject data, IFunctionPointer p) {
        String result = p.doJob(data);
        System.out.println(result);
    }
}


When UseFunctionPointer.main is executed, the doJob method in the FunctionPointer class is called. It adds a title and doJob:true to the JsonObject and returns a string, which is printed to the console as below:


{
  "Title" : "Using function",
  "doJob" : true
}


Now if the behavior of doJob needs to change, it can simply be changed in the FunctionPointer class, and the rest of the code stays the same. This is the conventional approach.


Using Anonymous Class

Instead of using a named implementation of the interface as in the first approach, I can use an anonymous class and provide the body of the 'doJob' method at the point of declaration.


package Main;

import io.vertx.core.json.JsonObject;

public class UseFunctionPointer {

    public static void main(String[] args) {
        //Method 2: Use anonymous class
        useFunctionPointer(new JsonObject(), new IFunctionPointer() {
            @Override
            public String doJob(JsonObject data) {
                data = data == null ? new JsonObject() : data;
                data.put("Title", "Using Anonymous class");
                data.put("doJob", true);
                return data.encodePrettily();
            }
        });
    }

    public static void useFunctionPointer(JsonObject data, IFunctionPointer p) {
        String result = p.doJob(data);
        System.out.println(result);
    }
}


When the main method executes, the outcome will be as below: 


{
  "Title" : "Using Anonymous class",
  "doJob" : true
}


By instantiating an anonymous class from the IFunctionPointer interface and implementing the method at the same time, we gain flexibility in changing the behavior of 'doJob'. In the code above, the title of the data is changed to 'Using Anonymous class'. Note also that there is no need for a separate FunctionPointer class in this second approach.


Using lambda

Using a lambda does not even require declaring a class, as was done in the previous approach. Because the interface is declared with @FunctionalInterface and has a single abstract method, an implementation can be provided inline as below:



package Main;

import io.vertx.core.json.JsonObject;

public class UseFunctionPointer {

    public static void main(String[] args) {
        //Method 3: Use lambda
        useFunctionPointer(new JsonObject(),
            //data -> { //is also valid here
            (JsonObject data) -> {
                data.put("Title", "Using Lambda");
                data.put("doJob", true);
                return data.encodePrettily();
            }
        );
    }

    public static void useFunctionPointer(JsonObject data, IFunctionPointer p) {
        String result = p.doJob(data);
        System.out.println(result);
    }
}


The lambda is much more concise and easier to read. The outcome of the above lambda-based approach is shown below. 


{
  "Title" : "Using Lambda",
  "doJob" : true
}
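
As a side note, the lambda does not have to be written inline; it can also be assigned to a variable of the functional interface type and reused. A small illustrative variation (the variable name and title string here are arbitrary):

//A lambda stored in a variable of the functional interface type; it can be reused across calls
IFunctionPointer toPrettyJson = data -> {
    data.put("Title", "Using a lambda variable");
    data.put("doJob", true);
    return data.encodePrettily();
};
useFunctionPointer(new JsonObject(), toPrettyJson);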

Conclusion

Passing a function as a parameter provides an easy way to achieve behavior parameterization, where the logic changes based on the function that is passed in. In all three approaches, the useFunctionPointer method was called, and it invoked the 'doJob' method of whatever was passed as a parameter: a concrete class, an anonymous class, or a lambda. Across the three methods, verbosity decreases from the conventional to the lambda-based approach while achieving the same functionality. Note that the lambda-based approach is only available in Java 8 and later.