Java Archive - CraftCoders.app https://craftcoders.app/category/java/ | Jira and Confluence apps | Wed, 14 Aug 2024 13:29:10 +0000

Solving the Knapsack Problem with the Jenetics Library https://craftcoders.app/solving-the-knapsack-problem-with-the-jenetics-library/ | Thu, 13 May 2021 10:46:48 +0000

According to its official documentation, Jenetics is a library for programming evolutionary algorithms, written in Java. Jenetics is implemented using the Java Stream interface, so it works smoothly with the rest of the Java Stream API. Evolutionary algorithms have their roots in biology, as they use mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. If you want to learn more about the theory behind evolutionary algorithms, I’d suggest reading Introduction to Evolutionary Algorithms first.

Disclaimer: This blog post is based on Introduction to Jenetics Library from Baeldung, but it uses the current library version (6.2.0) and a more complex example: the knapsack problem, without using the library’s provided classes for that problem.

The knapsack problem

Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.

Wikipedia

Defining the problem in code

In the following example, we have a class called “Knapsack” that represents our problem. The class defines items that consist of a size and a value (possibleKnapsackItems). These items are initialized with random values between 0 and 10 and put in a list to represent the items we can put into our knapsack. Furthermore, the class defines the maximum size the knapsack can hold. Attention: Don’t mix up the size of the knapsack (Knapsack.getKnapsackSize) with the number of items that we could put in the knapsack (Knapsack.getItemCount). The items that we actually put into the knapsack will be defined later in our evolutionary algorithm.

public final class Knapsack {
    private final List<Item> possibleKnapsackItems; // items that *might* end up in the knapsack, depending on chromosome
    private int knapsackSize;

    public Knapsack(List<Item> possibleItems, int knapsackSize) {
        this.possibleKnapsackItems = possibleItems;
        this.knapsackSize = knapsackSize;
    }

    public static Knapsack initializeWithRandomItems(int size, int knapsackSize) {
        Random random = new Random(123);
        List<Item> items = Stream.generate(() -> 
                new Item((int) (random.nextDouble()*10),(int) (random.nextDouble()*10)))
                .limit(size)
                .collect(Collectors.toList());
        return new Knapsack(items, knapsackSize);
    }

    public Item getItemByIndex(int index) { return this.possibleKnapsackItems.get(index); }
    public int getItemCount() { return this.possibleKnapsackItems.size(); }
    public int getKnapsackSize() { return this.knapsackSize; }

    public static final class Item {
        private final int size;
        private final int value;

        public Item(final int size, final int value) {
            this.size = Requires.nonNegative(size);
            this.value = Requires.nonNegative(value);
        }

        public int getSize() { return size; }
        public int getValue() { return value; }
    }
}

Let’s get started with the Jenetics Library

In order to use Jenetics, we need to add the following dependency into our build.gradle:

implementation 'io.jenetics:jenetics:6.2.0'

Next we create a runnable class App that will use the Jenetics library and our Knapsack class to run a genetic algorithm. First, let’s make use of our previously created class: We create a knapsack with a size of 100 and 80 items from which we can pick.

public class App {
    private final static int ITEM_COUNT = 80;
    private final static int KNAPSACK_SIZE = 100;
    private final static int POPULATION_SIZE = 500;

    private final Knapsack knapsack = Knapsack.initializeWithRandomItems(ITEM_COUNT, KNAPSACK_SIZE);

    public static void main(String[] args) {
        new App().run(POPULATION_SIZE);
    }

    public void run(int populationSize) {
        // TODO Run the genetic algorithm
    }
}

Let’s work on the run() function. We need to convert the knapsack problem into a representation that a genetic algorithm can work with, namely a chromosome. And indeed we can transform it into a so-called binary problem, where each one in the chromosome represents an item we put into the knapsack and each zero represents an item we leave out.

Using the Jenetics library we can create a BitChromosome with a length of 80 which is equal to the number of items we can choose from (ITEM_COUNT) and a probability of having 1’s in the chromosome equal to 0.3. These BitChromosomes are accessible via a factory, meaning we can generate as many randomly initialized chromosomes as we want our population size to be.

final Factory<Genotype<BitGene>> gtf =
        Genotype.of(BitChromosome.of(this.knapsack.getItemCount(), 0.3));

Now, let’s create the execution environment:

final Engine<BitGene, Integer> engine = Engine
        .builder(this::fitness, gtf)
        .populationSize(populationSize)
        .build();

The Engine will run our genetic algorithm and needs a few pieces of information:

  1. The factory we just created, which produces our random chromosomes
  2. The number of random chromosomes we want to create and compare (called populationSize)
  3. Last but not least, a fitness function, which we haven’t defined yet

The Fitness Function

The fitness function calculates the fitness of each chromosome. In the case of the knapsack problem, the fitness is equal to the sum of the values of the individual items that we place in our knapsack (i.e. items with a corresponding one in the chromosome). How to put that into code is something you can think about now 😉

private Integer fitness(Genotype<BitGene> gt) {
    BitChromosome chromosome = gt.chromosome().as(BitChromosome.class);
    int fitness = 0;
    // TODO: Calculate fitness
    return fitness;
}
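If you’d rather check your attempt against something, the core computation can be sketched in plain Java, independent of the Jenetics types. The boolean[] here stands in for the BitChromosome’s bits, and returning 0 for overweight solutions is one common way to handle the size constraint, not something the library prescribes:

```java
public class KnapsackFitnessSketch {
    // Sum up the values of all items whose bit is set; a solution that
    // exceeds the knapsack size gets fitness 0 and dies out quickly.
    static int fitness(boolean[] bits, int[] sizes, int[] values, int knapsackSize) {
        int totalSize = 0;
        int totalValue = 0;
        for (int i = 0; i < bits.length; i++) {
            if (bits[i]) { // item i is packed into the knapsack
                totalSize += sizes[i];
                totalValue += values[i];
            }
        }
        return totalSize <= knapsackSize ? totalValue : 0;
    }
}
```

In the real fitness function you would iterate over the BitChromosome’s bits and look the items up via Knapsack.getItemByIndex.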

A first run

In the final step, in our run function, we add some basic statistics, start the evolution and collect the results:

final EvolutionStatistics<Integer, ?> statistics = EvolutionStatistics.ofNumber();
final Phenotype<BitGene, Integer> best = engine.stream()
        // Truncate the evolution stream after 10 "steady"
        // generations.
        .limit(bySteadyFitness(10))
        // Update the evaluation statistics after
        // each generation
        .peek(statistics)
        // Collect (reduce) the evolution stream to
        // its best phenotype.
        .collect(toBestPhenotype());

System.out.println(statistics);
System.out.println(best);

If you put everything together and implemented the fitness function correctly, you should end up with a result looking like this:

 +---------------------------------------------------------------------------+
 |  Time statistics                                                          |
 +---------------------------------------------------------------------------+
 |             Selection: sum=0,029213700000 s; mean=0,000811491667 s        |
 |              Altering: sum=0,120244900000 s; mean=0,003340136111 s        |
 |   Fitness calculation: sum=0,054355500000 s; mean=0,001509875000 s        |
 |     Overall execution: sum=0,199033900000 s; mean=0,005528719444 s        |
 +---------------------------------------------------------------------------+
 |  Evolution statistics                                                     |
 +---------------------------------------------------------------------------+
 |           Generations: 36                                                 |
 |               Altered: sum=133.010; mean=3694,722222222                   |
 |                Killed: sum=0; mean=0,000000000                            |
 |              Invalids: sum=0; mean=0,000000000                            |
 +---------------------------------------------------------------------------+
 |  Population statistics                                                    |
 +---------------------------------------------------------------------------+
 |                   Age: max=14; mean=2,183056; var=7,349621                |
 |               Fitness:                                                    |
 |                      min  = 0,000000000000                                |
 |                      max  = 188,000000000000                              |
 |                      mean = 134,464166666667                              |
 |                      var  = 4503,017550280571                             |
 |                      std  = 67,104527047589                               |
 +---------------------------------------------------------------------------+
 [11101010|00000100|11000101|10001000|10001111|10100000|01010010|10110000|11000101|10000101] -> 188

If so, congratulations! You made it.

Further Optimization

So up until now, we told the engine to use a population size of 500 and let it decide for itself how to do mutation, recombination, and selection. Of course, if you want to improve the quality of your best phenotype you can configure these things yourself. An easy thing to do is to increase the population size to e.g. 5000, and your results will probably improve. But you can also tweak several things like mutation yourself:

final Engine<BitGene, Integer> engine = Engine
        .builder(this::fitness, gtf)
        .populationSize(populationSize)
        .survivorsSelector(new TournamentSelector<>(5))                    
        .offspringSelector(new RouletteWheelSelector<>())                   
        .alterers(
            new Mutator<>(0.115),
            new SinglePointCrossover<>(0.16))
        .build();

But gaining some real improvements with your own configuration is pretty time-consuming and would need another blog post, so I’ll leave that to you 😀

Greetings,

Domi

Clean Code: The hard facts and figures https://craftcoders.app/clean-code-the-hard-facts-and-figures/ | Mon, 14 Jan 2019 20:53:41 +0000

A couple of weeks ago I began to read Uncle Bob’s old developer bible: Clean Code. I was excited about the presented concepts and put a lot of thought into applying them in code. One day I was lying on the couch of our office, reading Chapter 2: Meaningful Names. Meanwhile, I overheard our team discussing the best troll name for a new database. Another day, I was reading the next chapter. Chapter 3: Functions should be small and do one thing only. Back at my desk, I found myself scrolling through functions with hundreds of lines of code.

Although most teams I know try to produce clean code, it seems to be a hard thing to keep a project clean while it grows. I began to wonder: How much clean code is really out there in the wild? Followed by: How can a project even be considered as clean? So I picked some famous open source projects and analyzed them!

What makes a project clean?

First, let’s summarize what I did: My first intent was to check a static code analysis tool like SonarQube, but I could hardly find an open-source project that also published the results of such tooling. This is when Metrilyzer was born: an analysis tool of mine (private projects again^^) which can read almost every Java-based project and do some data analysis on it. At first, I focused on the following metrics:

  • Classes per package
  • Lines per class
  • Methods per class
  • Lines per method
  • Parameters per method

Of course, these metrics are not enough to consider a project “cleanly coded”, but in my opinion they give a good indication of code modularity and of compliance with the single responsibility principle, which is one of the hardest things to accomplish from my point of view. So using these metrics you can at least see clearly when a project is not cleanly coded. 😉 Here are the results.

Cassandra, ElasticSearch, Spring Boot – The hard figures

The four tested projects are Cassandra 3.11, ElasticSearch 6.5, Spring Boot 2.1 and Neuronizer Notes (an Android app of mine). The boxplots show the number of lines per class (y-axis) per project (x-axis). N is the number of classes in the project (those that could be analyzed by Metrilyzer). The maximum values are somewhat cut off so that the rest of the plot stays readable, but you can still read them in the table. If you don’t know how boxplots work, look here: What a Boxplot Can Tell You about a Statistical Data Set

You can see that most of the classes are very small and more than 75% of all classes are smaller than 100 lines of code, although every project has a couple of huge classes. It seems like the bigger the project, the longer its longest class. Not very surprising, but things get more interesting when you compare different metrics. Let’s take a look at lines per method, for example:

Like the classes, most of the methods are very small, and more than 75% are shorter than 15 lines. Despite its large number of methods, Spring Boot does a very good job of keeping them small, with a maximum of 54 lines per method. Also interesting is the ratio between the two values of N for each project (which is the average number of methods per class):

  • Cassandra: 19393 methods in 4162 classes = 4.66 methods per class
  • Elastic Search: 36027 methods in 8021 classes = 4.49 methods per class
  • Spring Boot: 14140 methods in 5963 classes = 2.37 methods per class
  • Neuronizer Notes: 571 methods in 173 classes = 3.30 methods per class

I have to mention that getter and setter methods are excluded, so in reality the numbers are slightly higher (see metrics at the end). Neuronizer, which is a small application, has an easy time keeping classes and methods small. As you can see, Cassandra and Elastic Search have a harder time. But Spring Boot is doing very well in comparison to the others: its methods are even smaller than those of my little Android app. Okay, now let’s take a look at the most problematic classes.

Pinning down problems

What you can see here are the five biggest classes of each project.

Lines per class

Cassandra
  • org.apache.cassandra.service.StorageService: 4300
  • org.apache.cassandra.cql3.validation.operations.SelectTest: 2427
  • org.apache.cassandra.service.StorageProxy: 2244
  • org.apache.cassandra.db.LegacyLayout: 2160
  • org.apache.cassandra.db.ColumnFamilyStore: 2136

Elastic Search
  • org.elasticsearch.index.engine.InternalEngineTests: 4653
  • org.elasticsearch.index.translog.TranslogTests: 2804
  • org.elasticsearch.index.shard.IndexShardTests: 2652
  • org.elasticsearch.index.engine.InternalEngine: 2631
  • org.elasticsearch.index.shard.IndexShard: 2566

Spring Boot
  • org.springframework.boot.context.properties.ConfigurationPropertiesTests: 1509
  • org.springframework.boot.test.json.JsonContentAssert: 1277
  • org.springframework.boot.SpringApplicationTests: 1269
  • org.springframework.boot.SpringApplication: 1267
  • org.springframework.boot.test.web.client.TestRestTemplate: 1234

Neuronizer
  • de.djuelg.neuronizer.presentation.ui.fragments.TodoListFragment: 458
  • de.djuelg.neuronizer.presentation.ui.fragments.PreviewFragment: 285
  • de.djuelg.neuronizer.presentation.ui.fragments.ItemFragment: 251
  • de.djuelg.neuronizer.storage.TodoListRepositoryImpl: 248
  • de.djuelg.neuronizer.storage.TodoListRepositoryImplTest: 214

What I recognized first were the test classes. Since the teams out there (at least those I have been part of) care less about test code quality than about production code quality, it makes sense that test classes can get very long. You can also see that long classes lead to long test classes, Elastic’s InternalEngine and InternalEngineTests for example. As test classes grow it gets harder and harder to keep them maintainable, so a well-thought-out model for test classes should be applied. Regarding large test classes, I can recommend the article Writing Clean Tests – Small Is Beautiful.

Another important thing you can learn from this table is where the application has not been modeled carefully. Cassandra’s StorageService, for example, has a very generic name and became the biggest god class in the project. Elastic’s Engine and InternalEngine had a similar destiny. These classes could easily be separated into a couple of subclasses, but as they are now they just cannot be clean.

For the interested readers out there, here are the other metrics in uncommented form. They will be mentioned in the Conclusion though. All visualizations have been done using goodcalculators.com.

Conclusion

Probably you already thought at the beginning: you can’t put hard figures on the rules of Clean Code, like “Oh boy, this class here is 5 lines too long! Are you dumb?” But you can use these metrics as an orientation. You can aim for the Pareto principle. For example: refactor the worst 20% of each metric and try to stay in the lower 80% with all your new code. When you reach 100 lines of code in a class, for example, there could be better ways to modularize that piece of code. Here are the 80% boundaries for each metric (based on all analyzed projects):

  • 80% of all classes are smaller than 100 lines
  • 80% of all methods are smaller than 12 lines
  • 80% of all packages have less than 25 classes
  • 80% of all classes have less than 8 methods
  • 80% of all methods have less than 3 parameters
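If you want to derive such boundaries for your own codebase, the 80% cut-off is just a percentile over the measured values. A minimal sketch using the nearest-rank method (the class name is made up, this is not part of Metrilyzer):

```java
import java.util.Arrays;

public class MetricBoundary {
    // Nearest-rank percentile: the smallest measured value v such that
    // at least `percent` percent of all measurements are <= v.
    static int percentile(int[] measurements, double percent) {
        int[] sorted = measurements.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(percent / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }
}
```

Feed it, say, the lines-per-class numbers of your project and you get your own 80% boundary to aim for.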

Despite this being a rather shallow analysis of the topic of clean code, the results were quite interesting. Using Metrilyzer on a single project with tailored visualizations can be even more helpful to improve modularity and to locate trouble spots. Maybe you want to give it a try and analyze your own projects. If so, I would be glad to hear from you 🙂

Greets, Domi

A deep dive into Apache Cassandra – Part 1: Data Structure (was not continued) https://craftcoders.app/a-deep-dive-into-apache-cassandra-part-1-data-structure/ | Mon, 01 Oct 2018 19:14:56 +0000

Hey guys,

during my studies I had to analyze the NoSQL database Cassandra as a possible replacement for a regular relational database.
During my research I dove really deep into the architecture and the data model of Cassandra and I figured that someone may profit from my previous research, maybe for your own evaluation process of Cassandra or just personal curiosity.


I will separate this huge topic into several posts and make a little series out of it. I don’t know how many parts the series will contain yet, but I will try to keep every post as cohesive and understandable as possible.

Please forgive me, as I have to introduce at least a couple of terms or concepts I won’t be able to describe thoroughly in this post. But don’t worry, I will be covering them in an upcoming one.

What is Cassandra?

Cassandra is a column-oriented, open-source NoSQL database whose data model is based on Google’s Bigtable and whose distributed architecture is based on Amazon’s Dynamo. It was originally developed by Facebook; later Cassandra became an Apache project and is now one of the top-level projects at Apache. Cassandra is based on the idea of a decentralized, distributed system without a single point of failure and is designed for high data throughput and high availability.

Cassandra’s Data Structure

I decided to begin my series with Cassandra’s data structure because it is a good introduction to the general ideas behind Cassandra and a good foundation for future posts about the Cassandra Query Language and Cassandra’s distributed nature.

I’ll try to give you an overview of how data is stored in Cassandra and show you some similarities and differences to a relational database, so let’s get right to it.

Columns, Rows and Tables

The basic component of Cassandra’s data structure is the column, which classically consists of a key/value pair. Individual columns are combined into a row, which is uniquely identified by a primary key. A row consists of one or more columns plus the primary key, which itself can consist of one or more columns. To combine individual rows describing the same entity into a logical unit, Cassandra defines tables, which are containers for similar data in row format, equivalent to relations in relational databases.

the row data structure in Cassandra

However, there is a remarkable difference from tables in relational databases. If individual columns of a row are not used when writing to the database, Cassandra does not fill in a null value; instead, the entire column is simply not stored. This is a storage space optimization, and it means the data model of a table resembles a multidimensional array or a nested map.
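The nested-map analogy can be made concrete in a few lines of Java: a column that was never written simply has no entry, instead of a stored null. The table and column names here are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class NestedMapModel {
    // row key -> (column name -> column value); columns that were never
    // written take up no space at all
    static Map<String, Map<String, String>> buildSampleTable() {
        Map<String, String> row = new HashMap<>();
        row.put("name", "Alice");
        row.put("email", "alice@example.com");
        // note: no "phone" column is stored for this row

        Map<String, Map<String, String>> table = new HashMap<>();
        table.put("user-1", row);
        return table;
    }
}
```

Asking such a table for the missing "phone" column yields "no entry" rather than a stored null, which is exactly the space optimization described above.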

table consisting of skinny rows

Skinny and Wide Rows

Another special feature of tables in Cassandra is the distinction between skinny and wide rows. So far I have only described skinny rows: they do not have a complex primary key with clustering columns, and they have few entries in the individual partitions, in most cases only one entry per partition.

You can imagine a partition as an isolated storage unit within Cassandra. There are typically several hundred of said partitions in a Cassandra installation. During a write or read operation the value of the primary key gets hashed. The resulting value of the hash algorithm can be assigned to a specific partition inside the Cassandra installation, as every partition is responsible for a certain range of hash values. I will dedicate a whole blog post to the underlying storage engine of Cassandra, so this little explanation has to suffice for now.
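The key-to-partition mapping described above can be sketched as follows. Cassandra really uses Murmur3 hashes and per-node token ranges; the String.hashCode and modulo here are deliberate simplifications to show the idea, not Cassandra’s actual algorithm:

```java
public class PartitionSketch {
    // Hash the partition key and map the hash to one of the partitions;
    // in real Cassandra each node owns a range of Murmur3 token values.
    static int partitionFor(String partitionKey, int partitionCount) {
        int hash = partitionKey.hashCode();          // stand-in for Murmur3
        return Math.floorMod(hash, partitionCount);  // stand-in for token ranges
    }
}
```

The important property is that the same key always hashes to the same partition, so reads know exactly where to look.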

Wide rows typically have a significant number of entries per partition. These wide rows are identified by a composite key, consisting of a partition key and optional clustering keys.

table consisting of wide rows


When using wide rows you have to pay attention to the limit of two billion entries per partition, which can be reached quite fast when storing measured values of a sensor, because after reaching the limit no more values can be stored in that partition.


The partition key can consist of one or more columns, just like the primary key. Therefore, to stay with the example of the sensor data, it makes sense to build the partition key from several criteria. Instead of simply partitioning by, for example, a sensor_id, which depending on the amount of incoming measurement data would sooner or later inevitably exceed the limit of two billion entries per partition, you can combine the partition key with the date of the measurement. If you combine the sensor_id with the date of the measurement, the data is written to a new partition on a daily basis. Of course you can make this coarser or finer as you wish (hourly, daily, weekly, monthly).
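The date bucketing described above can be sketched like this: the logical partition key combines the sensor id with a (here daily) UTC time bucket, so each day’s measurements land in a fresh partition. The names are illustrative, this is not Cassandra API:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class SensorPartitionKey {
    // Combine sensor id and UTC day into one partition key; switch the
    // formatter for coarser or finer buckets (hourly, weekly, monthly).
    static String partitionKey(String sensorId, Instant measuredAt) {
        String day = DateTimeFormatter.ISO_LOCAL_DATE
                .withZone(ZoneOffset.UTC)
                .format(measuredAt);
        return sensorId + ":" + day;
    }
}
```

In real CQL you would express this as a composite partition key over both columns; the sketch just shows why measurements written on different days can never pile up in the same partition.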

The clustering columns are needed to sort data within a partition. A primary key without additional clustering columns is at the same time the partition key.

Several tables are collected into a keyspace, which is the exact equivalent of a database in relational databases.

Summary

To summarize, the basic data structures are:

  • the column, consisting of key/value pairs,
  • the row, which is a container for contiguous columns, identified by a primary key,
  • the table, which is a container for rows and
  • the keyspace, which is a container for tables.

I hope I was able to give you a rough overview of the data structure Cassandra uses. The next post in this series will be about the Cassandra Query Language (CQL), in which I will give you some more concrete examples of how the data structure affects data manipulation.

Cheers,

Leon

Spring Cloud Netflix Sidecar Tutorial https://craftcoders.app/spring-cloud-netflix-sidecar-tutorial/ | Mon, 20 Aug 2018 21:38:21 +0000

Introduction

Hey guys,
this week’s post is about Microservices created with the various Spring Cloud frameworks and how to include services written in non-JVM programming languages into the Spring Cloud ecosystem using Spring Cloud Netflix Sidecar. Please be aware that this tutorial is specifically written for people who know the architectural style of Microservices and are creating applications using the various Spring Cloud frameworks or plan to do so.

If you don’t know what Microservices are, please read the excellent blog post from Martin Fowler regarding Microservices. Basically every book, article or scientific paper (and my bachelor thesis) about this architectural style is based on this blog post, so yeah, it’s a pretty good read.


The problem I faced

I am currently in the process of writing my bachelor thesis and therefore implemented a prototypical application using the microservice architectural style. Because I’m a Java guy and know the Spring framework, I decided to implement the application using the Spring Cloud ecosystem. I use a Eureka server as a service registry. Furthermore, I implemented several Spring Boot services and was able to register them with Eureka using an annotation and a little bit of configuration.

It turned out that I had to implement one of the services making up my application in PHP (yikes!) because a library I had to use is not available in Java. Because I only had two weeks for the implementation of my prototype, I certainly wouldn’t have been able to write a Java implementation of the library. Therefore I decided to create a PHP microservice with the help of Lumen.

Furthermore, I didn’t want to miss out on the fancy features of my service registry like client-side load balancing and the decoupling of my service providers from my consumers. After a bit of research I found the documentation of the Eureka HTTP API. I got discouraged at the sight of the XSD I would have had to implement in my PHP service to register it with Eureka. I really did not want to implement the various REST operations manually in my service because my PHP knowledge is very limited and I had never used Lumen before.

I was on the verge of giving up when I found Spring Cloud Netflix Sidecar. It promised to let me register my PHP service with Eureka using one annotation and a little configuration, just like my other services written with Spring Boot.


Spring Cloud Netflix Sidecar

Spring Cloud Netflix Sidecar is a subproject of Spring Cloud Netflix and is inspired by the Netflix Prana project. A sidecar service is basically yet another Spring Boot application that runs on the same host as your non-JVM service. It registers itself with your service registry under a defined application name and frequently checks the health of your non-JVM service via a REST call. The sidecar is also able to forward calls from other services of your application. Because using a sidecar application requires only minimal changes to your non-JVM application, it’s also great for legacy projects.


A working example

For you to get the hang of Spring Cloud Netflix Sidecar, I created a very minimalistic project consisting of a Eureka server, a Lumen service capable of doing nothing, and the corresponding sidecar application. You need Docker and docker-compose installed to run this example. To run the example application, clone the project from our GitHub repository. After that, change into its directory and type docker-compose up -d into your console. This command pulls all necessary images from our DockerHub registries and starts the containers. After everything has started, you can access http://localhost:8761/, which is the Eureka dashboard, and see the lumen service registered.

You can stop the container containing the lumen service by typing docker stop *lumen-service*, and the status of the application on your Eureka dashboard should change to DOWN a few seconds later. That is because the sidecar application’s heartbeats are no longer answered by your lumen service, obviously.


How to set this up

Sadly, sidecar isn’t available in Spring Initializr, so you have to manually add the following Maven dependency to your Spring Boot application:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-netflix-sidecar</artifactId>
</dependency>

After adding the dependency you can annotate your main class with the @EnableSidecar annotation.

It would not be a proper Spring application if you didn’t have to create an application.yml and add all the necessary configuration, so let’s do this.

server:
  port: 5678

spring:
  application:
    name: lumen-service

eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URI:http://localhost:8761/eureka}
  instance:
    preferIpAddress: true

sidecar:
  port: ${SIDECAR_PORT:8000}
  health-uri: ${SIDECAR_HEALTH_URI:http://localhost:8000/health}

We have to tell the sidecar application on which port to run and what its name is. Note that the spring.application.name property is the one displayed on your Eureka dashboard. Furthermore, we have to tell the application where the registry server is located.

The important configuration properties are:

  • sidecar.port: The port your non-JVM application is listening on.
  • sidecar.health-uri: The REST endpoint of your non-JVM application where you implemented a health check.

The configured health check REST endpoint should return a JSON document looking like this:

{
   "status": "UP"
}


Implementing such a simple health check in Lumen is pretty easy; just add the following code snippet to your web.php located in the routes folder of your project:

$router->group(['prefix' => 'health'], function () use ($router) {

    $router->get('', function () {
        return response()->json(['status' => 'UP']);
    });

});

And that’s all you have to change in your non-JVM application to get all the advantages of the Spring Cloud ecosystem.


Roundup

In this post I showed you how to include a non-JVM application in your Spring Cloud ecosystem. This can be done by creating a Spring Boot application with the sidecar dependency and some configuration, and adding a simple health check to your service.


I hope I was able to help some people with this post!

Best regards

Leon

OpenCV & Tesseract: Android Computer Vision for Dummies https://craftcoders.app/opencv-tesseract-android-computer-vision-for-dummies/ | Mon, 06 Aug 2018 13:52:38 +0000

Introduction

I got to know OpenCV when I investigated computer vision solutions for a little startup called robodev GmbH (see here). There I developed possible approaches for real-time object recognition and for computing the angle and position of the recognized object. I fell in love with C++ and its possibilities for realizing high-performance applications. After a while I started digging further into Android development in Java. Due to my experience as a mobile (games) producer at Goodgame Studios, I’m driven by the possibilities and potential of mobile solutions, so plenty of awesome app ideas get born in my brain. Now is the time to make my ideas come to life.

Come to life

I wrote my first blog post about how I got into programming (see here) and how you can do that too, even if you’re not from the IT field. In this post, we will play around with OpenCV in Android. Since I didn’t have a small use case in mind for the vast feature list of OpenCV, I’ll keep that part simple.

For my first app idea, reading business cards could be a neat side feature, and as I heard that OpenCV and Tesseract are a powerful team – process pictures with OpenCV and read texts with Tesseract OCR – I saw an excellent opportunity to put both into a thin app.
In this blog post, I will show you how to write that app all by yourself, step by step. You will learn to use C++ code, OpenCV and Tesseract OCR on Android.

What is OpenCV?

OpenCV is an open-source computer vision library. It has a mighty feature set, ranging from simple image processing over object, face, and feature recognition, 3D calibration, and visualization up to machine learning and more. There is a big community of scientists, professionals, and hobbyists maintaining comprehensive documentation – helpful for everyone, including beginners (see here). I was more than happy to realize that it supports Android development, next to C++, C, Python, Scala, and Java. Unfortunately, the built-in OCR is not known to be very mighty, therefore I decided to combine the strengths of OpenCV's image processing with another library, called Tesseract (as a Marvel fan I really like that name).

Loki

What is Tesseract OCR?

Tesseract is a free, open-source command-line OCR program for Linux and Windows; next to supporting 100+ different languages, it was considered the most accurate open-source OCR available in 2006. In 2011 a brave man called Robert Theis released "tess-two", a fork of the Tesseract OCR Android tools, maybe not knowing that just two years later no one would work on the Tesseract Android Tools anymore (the last commit was made in 2013). Thanks to Robert Theis we can enjoy the open-source library up till today, updated, with new features and easy to use. So let's get to work and get the setup of your Android Studio done:

OpenCV Part

Setup Android NDK

To be able to use OpenCV in its pure form, we first have to set up the Android NDK properly, and yes… we're gonna write C++ code in our app 😉 AAAHHH

After creating a new Empty Activity project, use the Android Studio SDK Manager (Tools -> SDK Manager -> SDK Tools) to install the necessary tools: NDK, CMake, and LLDB. Mind the location of your SDK though – you'll need it for the next step. After applying your changes, a large download of around 700MB will start – don't worry, that is as intended.
sdktools_setup

Next time you wake up from sweet dreams, maybe the download is over and you can keep working. Now you need to add the location of your Android NDK. Write it into File -> Project Structure -> SDK Location -> Android NDK location. Yes, you just saw the needed path: it is the SDK location + /ndk-bundle.

Android NDK Location

There you go. Your first step to glory is over. The next stuff is going to be a bit more tricky. OpenCV – here we come!

Setup OpenCV

To set up OpenCV, the first step is to download the latest release of the library from their page: OpenCV Releases (at this point it's 3.4.2). Unpack the zip, preferably into the Android SDK folder (so you have it all in one place), and import it as a module into your project: File -> New -> Import Module. In the dialogue that pops up, navigate to the java folder of the just-unpacked module; mine is /home/soeren/Downloads/OpenCV-android-sdk/sdk/java (on Linux). If the path is correct, the name of the library will show up and it looks like the following (ignore "Project already…." in that pic):

import module window

Hit “Next” and “OK” as long as they ask you stuff, keep going with the default checkmarks until you finally finish. Now, add the module as a dependency to your app. At File -> Project Structure -> app -> Dependencies -> + -> Module dependency you will find OpenCV shown in a pop-up.

add module dependency

The next step is to adjust the build.gradle file of the OpenCV library (you will find it right under your app's build.gradle file). Align the fields compileSdkVersion, minSdkVersion, and targetSdkVersion so they match your app's build.gradle file. Mine looks like this:
build.gradle (openCVLib…) for initial setup

Now create the directory src/main/jni in your project. In there, create a file called Android.mk and another called Application.mk – they tell the NDK tool how to build the native code.

where to put the mk-files

Write the following code into the Android.mk-file:
Android.mk for NDK and OpenCV setup
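The gist isn't reproduced here, but an Android.mk for this kind of OpenCV + NDK setup typically looks roughly like the following sketch. All paths, the module name, and the variable values are placeholders you must adapt; the linked gist remains the authoritative version:

```makefile
# Sketch of an Android.mk for an OpenCV NDK build (placeholder paths!)
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

# Adjust these two to your machine
NDK_MODULE_PATH := /path/to/Android/Sdk/ndk-bundle
OPENCV_ROOT := /path/to/OpenCV-android-sdk

# Pull in the OpenCV build rules
OPENCV_INSTALL_MODULES := on
include $(OPENCV_ROOT)/sdk/native/jni/OpenCV.mk

LOCAL_MODULE := MyOpencvLibs
LOCAL_SRC_FILES :=            # left empty for now; filled in later
LOCAL_LDLIBS += -llog

include $(BUILD_SHARED_LIBRARY)
```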

Write your Android NDK path into NDK_MODULE_PATH and replace OPENCV_ROOT with your own path. Later we will add the LOCAL_SRC_FILES name, but for now, we can leave it empty. Then modify the Application.mk-file:
Application.mk for NDK setup

Now you can tell your build.gradle-file (app) to compile C++-code to .so-files into the jniLibs folder. Do that by writing this…
build.gradle (app) File for OpenCV setup
…into your build.gradle (app) file, overwriting the default buildTypes part. Remember to change the ndkDir value to your own. Don't hesitate to run the Gradle sync, but if you try to build your app now you will get a wonderful error, something like Process 'command 'PATHTONDK/ndk-build'' finished with non-zero exit value 2. These cryptic error messages cost me quite some patience, but don't worry: this is still as expected, because we didn't specify any LOCAL_SRC_FILES. That's what we're going to do now. For the sake of convenience, we're going to use javah as an external tool. To do so, navigate to File -> Settings -> Tools -> External Tools -> +. Add the javah tool like this:

javah setup

Fill in the parameters as follows:
* “Program”: the path to your javah installation (with Linux you can find out the path typing which javah into a terminal)
* “Arguments”: -v -jni -d $ModuleFileDir$/src/main/jni $FileClass$
* “Working directory”: $SourcepathEntry$

After successfully setting up javah, write a little test function into your MainActivity:

public static native int test();

Use the javah tool via right-click on test() -> Android Tools -> javah and it generates a corresponding header file, which you can find in your jni folder. Copy the name of that file into your Android.mk-file as LOCAL_SRC_FILES; mine is called com_example_android_MYAPPLICATION_MainActivity.h, yours will be something close to that, I guess.
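javah derives both the header file name and the exported C function name from the fully qualified class name. A small stand-alone sketch of that mapping (simplified: underscores inside package or class names would additionally be escaped as "_1" per the JNI spec, which is skipped here; the package name is a placeholder):

```java
public class JniNameSketch {
    // Header file name: fully qualified class name with '.' -> '_', plus ".h"
    static String headerName(String fqcn) {
        return fqcn.replace('.', '_') + ".h";
    }

    // C function name: "Java_" + mangled class name + "_" + method name
    static String functionName(String fqcn, String method) {
        return "Java_" + fqcn.replace('.', '_') + "_" + method;
    }

    public static void main(String[] args) {
        String fqcn = "com.example.android.myapplication.MainActivity";
        System.out.println(headerName(fqcn));
        // prints com_example_android_myapplication_MainActivity.h
        System.out.println(functionName(fqcn, "test"));
        // prints Java_com_example_android_myapplication_MainActivity_test
    }
}
```

So for the native test() method you end up implementing a Java_..._MainActivity_test function on the C++ side later.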
Now, run and pray that you didn’t make any path- or copy-paste-mistake. If something is not working and you get a cryptic error-message – check all the paths again, and when that doesn’t help, start over again – been there… done that… can happen to everyone! Just don’t give up. 🙂
If it builds successfully and a “Hello World!” is smiling from your Device-Screen up to you: YEAH!!! You’ve done great so far. The actual fun starts now!

Setup the Camera

To use the camera we’re going to implement some code. Initially we implement our MainActivity and its XML-structure. This is going to be a small button on an empty screen, nothing more. But there is more to come! I prepared a gist for you, where you can simply copy/paste the code into the corresponding files: MainActivity for initial setup (don’t forget to change the package name)

Our next step is to create a new empty Activity for working with the camera, let’s call it CameraActivity. Here we will call the camera and have fun with OpenCV and tess-two later, but for now, we will only call the camera. Copy the code from the following gist into the corresponding files: CameraActivity initial setup.

As you can see, we really should take care of state changes, because running the camera constantly is pretty demanding. New to you will be the onCameraViewStarted, onCameraViewStopped, and onCameraFrame methods. Here you can directly interact with the camera input.
In our MainActivity we will call the CameraActivity, by replacing the // do stuff here with the following code:

Intent intent = new Intent(MainActivity.this, CameraActivity.class);
startActivity(intent);

Well done! You should be able to run the app now. And if you press the button… wait! OpenCV Error Msg

What is this? Ahhh!! We forgot the permissions, can it be it’s the permissions?

Get the permissions for OpenCV

So for getting the permissions, we will check for them and, if necessary, ask for them in the MainActivity when the user presses the button. Modify the MainActivity with all the uncommented code at its proper position (as shown with the comments): Permissions in MainActivity

The last step to get the permissions is to touch the AndroidManifest.xml: Manifest for camera permissions

Okay, let’s see if this was the mistake and run the app…Asking for permissions seems to be working but…Package not found

Oh come on! Nooooo!! Press “No”! We don’t need that ominous “OpenCV Manager”! We already have the whole library inside our app.

oh come one!

Ok ok…calm down. We can easily correct the rude OpenCV behavior:

Ignore OpenCVManager

One of the most useless features of OpenCV is that it checks whether the device it's running on has the OpenCV library installed, and if not, it won't work until the OpenCV Manager app is installed. Since we put the whole library into the actual app, we don't need that anymore; yet, because the library isn't completely loaded at the moment of the check, it causes this message. With the following gist, we override the onPackageInstall method of the BaseLoaderCallback class to prevent the app from triggering the installation. Add the following uncommented code into your CameraActivity, right into the BaseLoaderCallback class:
BaseLoaderCallback-class override the onPackageInstall-method

Now run the app again. If you didn’t mix up some code it should finally look like this:

First cam shot

Good job! But no time to celebrate: Somehow the camera orientation and size make you feel a bit dizzy, right? We'll get our hands dirty and change something right inside the OpenCV library to fix this one. Find the class CameraBridgeViewBase here:

CameraBridgeViewBase Location

Look for the method deliverAndDrawFrame and replace it with the following code: Adjusting rotation and scale of the camera

Yeah! How wonderfully straight it looks like this, right? It somehow feels like the first time we see stuff through our camera. If your camera view seems a bit too small or big, you can simply change the mScale value accordingly. If you made it till here, congratulations! Figuring out the whole procedure till here cost me around 2 days of my life! I’m glad to make sacrifices for you, though 😉 The next part is going to be much easier!

Use the OpenCV library for image processing

Let’s start putting some functionality into our app and add some more buttons to our activity_camera.xml and corresponding listeners to our CameraActivity:
Adding function buttons, listeners and call OpenCvMaker methods

Don’t mind the errors yet, we will get to those now. Create a Java class with the name OpenCvMaker.java and add the following code (and again, remember to change the package name):
Create an OpenCV manager class

Now delete the header file that was generated earlier, when you set up the native test() method with the javah tool.

Generated header file

Navigate to your OpenCvMaker class and use the javah tool on one of the native methods (e.g. makeGray(long......)) via right click -> Android Tools -> javah. A new header file will be generated. Go to this file and make the following changes:
Add methods to the header

Then, inside the jni folder, create a .cpp file (without a header), give it the same name as your header, and add the following code to it:
Implementation of OpenCV methods in C++

And last but not least, copy the name of this .cpp-file (including the “.cpp“!!) and set it as your LOCAL_SRC_FILES inside your Android.mk-file.

Run your app and…tadaaa! Now you can play around with four of the most common functions of OpenCV. Well done!

celebrations

You just learned how to use C++-code AND OpenCV functionality inside your Android app. If this is all you wanted to learn, give yourself a break and lean back. Give your beloved ones a hug and tell them, you did it! Also, give yourself a break when this took you too long and your computer is smoking from all the building processes. In times like these, during the hot summer, I prefer to put my laptop into the fridge to give it a little chill 🙂
my computer in the fridge

Tess-Two Part

Now that we had a wonderful bath in a jacuzzi full of self-esteem, we can go back to work. There is one little part missing: The OCR! Don’t worry, I will make it quick.
First, add the following dependency to your build.gradle (app)-file and trigger the gradle-sync:

implementation 'com.rmtheis:tess-two:9.0.0'

Tess-two is now available and ready to use. Add the HUD components to activity_camera.xml and some functionality to CameraActivity.java with the following modifications:
Read-button and textview for CameraActivity

Create a new Java class with the name MyTessOCR.java and put the following code into it (don’t forget to change the package!):
Implementing the tess-two class

Let’s use it in the CameraActivity, and make some changes in the CameraActivity. Since computation of the OCR could be process-demanding, we will call it in an extra thread. Also, you will be asked to import the android.graphics.Matrix, continue and confirm. This is only for rotating the input signal of the camera according to the preview, that we’ve rotated inside the OpenCV library already. Do the following: Call MyTessOCR class from CameraActivity

Tesseract, and therefore tess-two, needs traineddata to know how and what to recognize. Since we are using tess-two, which is a fork of an older Tesseract version, we cannot just use any traineddata that is available in the Tesseract git repository. Download the following traineddata and you will be fine: Download eng.traineddata. Create an asset folder via right-click on Java -> New -> Folder -> Asset Folder, then create a directory called tessdata and finally copy the downloaded eng.traineddata file into that directory. When you're done it should look like this:

traineddata location

Next, you need to make sure that this file is also put onto the device. Therefore we need to add a little method to our CameraActivity and call it in the onCreate method: Add prepareTessData-method to CameraActivity

Alright! The last thing to do is to get the permissions for read and write access to the external storage, since our previously added method needs them. Go and add the following checks and constants to the MainActivity and the additional permissions to the AndroidManifest.xml:
Ask for external storage read and write permissions

Okay! Breathe in. Breathe out. You did it! Build (it can take a while), run and play 🙂 Hold your phone in front of the text below and press READ IT! in your app.

Congratulations

Find my original Git repo of the app here

]]>
https://craftcoders.app/opencv-tesseract-android-computer-vision-for-dummies/feed/ 0
A quick Introduction to Jib https://craftcoders.app/jib-introduction/ https://craftcoders.app/jib-introduction/#respond Mon, 16 Jul 2018 19:34:46 +0000 https://craftcoders.app/?p=383 Read More]]> Hey guys, last week I stumbled upon an announcement on the Google Cloud Platform. They presented a new open-source tool which aims to ease the containerization of your Java project. Without even reading the whole announcement, I declared it the topic of my next blog post. Well, here it is 🙂

What is Jib?

Jib is a containerizer for Java applications you can seamlessly integrate into your favourite build management tool (given it is either Maven or Gradle). The essence of the functionality is pretty simple: It abstracts most of the steps of packaging your application into a Docker image and pushing it to a registry. As a result you don’t have to write Dockerfiles and besides that you don’t even have to have Docker installed on your computer, which is pretty cool.

several steps are needed to push your image to a registry

Build flow without Jib

Jib handles all intermediary steps between building your application and publishing a docker image to a registry

build flow with jib

Why should you use Jib?

Google promises three major points that improve your development process if you use Jib:

  1. Simple
  2. Fast
  3. Reproducible

I thought this to be very interesting, as I have been struggling with writing Dockerfiles ever since I started using Docker. I decided to take it easy at first and start out by trying Jib on a simple Hello World application, since I was a bit cautious because it is still in the beta phase (version 0.9.6 as of now). My precautions turned out to be partly justified, due to the (to me) confusing credential management.

All examples shown in this post are done with the Maven plugin; the steps and configuration parameters are similar for Gradle though.

Hello World

Alright, let’s containerize a simple Hello World application! I’m gonna spare you the Hello World code and jump right into the pom.xml.

<build>
    <plugins>
      ...
        <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <version>0.9.6</version>
            <configuration>
                <to>
                    <image>registry.hub.docker.com/craftcodecrew/jibdemo</image>
                    <credHelper>wincred</credHelper>
                </to>
                <container>
                    <mainClass>jib.demo.HelloWorld</mainClass>
                </container>
            </configuration>
        </plugin>
        ...
    </plugins>
</build>

As you can see, Jib is a simple Maven plugin with, in this case, the configuration options <to> and <container>. <to> specifies the registry you want your image automatically pushed to and your preferred credential helper. <container> lets you configure your image to your likings, just like you would in a normal Dockerfile. You can set the main class, some JVM flags, arguments, and the ports you want your container to expose. For a more detailed documentation of the configuration options visit the extended usage part of the Jib documentation.
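As a hedged illustration of those extra options (not taken from the post's example; the option names follow my reading of the plugin's extended-usage docs for this version, and all values are invented), the configuration could be extended like this:

```xml
<configuration>
    <to>
        <image>registry.hub.docker.com/craftcodecrew/jibdemo</image>
    </to>
    <container>
        <mainClass>jib.demo.HelloWorld</mainClass>
        <!-- everything below is an invented example; check the docs -->
        <jvmFlags>
            <jvmFlag>-Xms256m</jvmFlag>
        </jvmFlags>
        <args>
            <arg>--verbose</arg>
        </args>
        <ports>
            <port>8080</port>
        </ports>
    </container>
</configuration>
```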

Credential management

I composed this configuration rather quickly, and as I was already using the wincred credential helper on my Windows 10 laptop and was logged into my DockerHub account, I was certain that it was going to work out of the box. Sadly, it didn't. It didn't work because I wasn't aware that my credential helper had stored my credentials for the official DockerHub API URL "https://index.docker.io/v1/". You have to store the credentials for the exact URL you specified in the <image> tag of your pom.xml, so in my case "registry.hub.docker.com". Hence, it doesn't work when you specify a protocol either, as I found out with the registry URL "https://registry.hub.docker.com".

This confused me for longer than it should have, because at first I didn't even know how to store new credentials in the credential helper. Furthermore, there are no examples in the documentation of the plugin, nor in the documentation of the credential helper itself. Eventually, I was able to store the credentials for the example above in docker-credential-wincred using the Windows PowerShell with the following command:

type .\credentials.txt | .\docker-credential-wincred.exe store

The content of credentials.txt is

{ 
    "ServerURL": "registry.hub.docker.com",
    "Username": "craftcodecrew", 
    "Secret": {my-super-secret-password} 
}

The type command is similar to cat on Linux systems, so the command pipes the contents of the credentials.txt file into the standard input. The docker-credential-wincred application reads the credential from standard input and stores it.
I could have just put the credentials in the settings.xml in my .m2 folder, but I just had to get it done with the credential helper. I realize that this is probably quite a rookie mistake, but Jib is supposed to be designed especially for Java developers who are not very experienced with containerization.
Therefore, I would have appreciated a little more documentation for the credential management either on the plugin or the credential management website. Hopefully my struggle with the credentials helps you getting your configuration done faster than me.

After I figured this out, the build process worked smoothly and I was able to push my containerized Hello World application effortlessly to DockerHub with the following command

mvn compile jib:build

If this command is too long for you to type, you can also bind the Jib containerization to a Maven lifecycle phase of your liking, for example package. You just have to add an <execution> tag to the plugin definition of Jib

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  ...
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Now you can build a jar, make a container image out of it, and finally push it to a Docker registry by simply typing mvn package. If that isn't convenient, I don't know what is. You can try out my awesome Hello World container by typing the commands

docker pull craftcodecrew/jibdemo
docker run craftcodecrew/jibdemo

The example code is available in our GitHub organization.

Conclusion

Coming back to the three promises Google made, I can confirm all three of them as confidently as someone who containerized a Hello World application with it. The only real struggle was the fumbling around with the credentials and the credential helper, but that was only partly the "fault" of the Jib plugin. I will try to set up Jib for more complex applications in the near future, probably on some Spring Boot applications, so stay tuned for an update on this post.

Cheers,
Leon

]]>
https://craftcoders.app/jib-introduction/feed/ 0
Getting Started with Telegrams AbilityBot https://craftcoders.app/getting-started-with-the-telegram-abilitybot/ https://craftcoders.app/getting-started-with-the-telegram-abilitybot/#respond Mon, 18 Jun 2018 08:00:43 +0000 https://billigeplaetze.com/?p=46 Read More]]> This getting started is for developers who want to create a telegram chatbot with Java. There are many articles on the internet explaining how to create a chatbot based on the so-called LongPollingBot. But development works much faster and easier with the AbilityBot, which will be our topic for today.

In the course of the article we are going to build a simple chatbot that reminds its users every two days to work out. The result will look something like this:

bot chat result

Even with this simple example, we can take a look at a bunch of great features provided by the Telegram API and, more specifically, by the AbilityBot. In a nutshell, those features are:

  • Replying to commands like /start
  • Using inline keyboards like in the picture above
  • Making use of the embedded database in AbilityBot
  • And as an extra: repeated execution of tasks at a specific time
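The last point in the list – repeated execution – needs nothing beyond the JDK. A minimal stand-alone sketch using ScheduledExecutorService (the real bot would call silent.send(...) inside the task and use a period of 2 with TimeUnit.DAYS; the demo fires every 100 ms):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReminderSketch {

    // Schedules a repeating task and returns true once it has run 'times'
    // times within the demo timeout.
    static boolean remind(Runnable task, long period, TimeUnit unit, int times)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(times);
        scheduler.scheduleAtFixedRate(() -> { task.run(); done.countDown(); },
                0, period, unit); // first run immediately, then every 'period'
        boolean finished = done.await(5, TimeUnit.SECONDS); // demo-sized timeout
        scheduler.shutdownNow();
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        // Demo: fire every 100 ms instead of every 2 days
        remind(() -> System.out.println("Time to work out!"), 100,
                TimeUnit.MILLISECONDS, 2);
    }
}
```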

If you don’t want to work yourself through the whole article, feel free to take a look at the finished code on github.

Okay then, let’s start coding!

Project setup

For the AbilityBot to work, you need to set up a Maven project in a Java 8 environment. In case you have never done that before, head over to Maven's getting started guide and set up a new Maven project. In your pom.xml add the following two dependencies:

<dependencies>
        <dependency>
                <groupId>org.telegram</groupId>
                <artifactId>telegrambots</artifactId>
                <version>3.6.1</version>
        </dependency>

        <dependency>
                <groupId>org.telegram</groupId>
                <artifactId>telegrambots-abilities</artifactId>
                <version>3.6.1</version>
        </dependency>
</dependencies>

Ensure that you are using at least Java 8 by adding the following build plugin to your pom:

<build>
        <plugins>
                <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-compiler-plugin</artifactId>
                        <configuration>
                                <source>8</source>
                                <target>8</target>
                        </configuration>
                </plugin>
        </plugins>
</build>

Getting your bot to talk

At first, we create our bot class called FitnessBot. It extends the class AbilityBot, thus we need to call super() in the constructor and override the method creatorId(). Furthermore, we add another constructor without any arguments so we can easily instantiate our bot.

public class FitnessBot extends AbilityBot {

    public FitnessBot() {
        this(Constants.BOT_TOKEN, Constants.BOT_USERNAME);
    }

    private FitnessBot(String botToken, String botUsername) {
        super(botToken, botUsername);
    }

    @Override
    public int creatorId() {
        return Constants.CREATOR_ID;
    }
}

At the latest when you paste this code into your IDE, you will recognize the unresolved reference to an interface called Constants. It is a common software pattern, widely used in Android for example: the interface contains all the constant values needed by your application to function properly. Later we will use it for all our text responses, database identifiers, and similar stuff. To fix your code for now, create a new interface and paste the following lines.

public interface Constants {
    // Initialization
    String BOT_USERNAME = "FitnessBot";
    String BOT_TOKEN = "your-super-secret-token";
    int CREATOR_ID = your-telegram-user-id;
}

You need to replace the information with the ones @BotFather gave to you. If you don’t know what I’m talking about please refer to this tutorial. It contains information on how to create a new Telegram bot.

Running your bot

Before you can run your bot you have to initialize it. Therefore we create a new class called Application and fill it with a main() method containing the code to initialize the bot. This block is copied from the GitHub documentation of AbilityBot.

public class Application {

    public static void main(String[] args) {
        // Initializes dependencies necessary for the base bot
        ApiContextInitializer.init();

        // Create the TelegramBotsApi object to register your bots
        TelegramBotsApi botsApi = new TelegramBotsApi();

        try {
            // Register your newly created AbilityBot
            FitnessBot bot = new FitnessBot();
            botsApi.registerBot(bot);

        } catch (TelegramApiException e) {
            e.printStackTrace();
        }
    }
}

Now you can run your bot, but it won't answer yet. In the next step, we change this by developing the first ability: a response to the command /start. Therefore, you have to add the following method to your FitnessBot class.

public Ability replyToStart() {
    return Ability
        .builder()
        .name("start")
        .info(Constants.START_DESCRIPTION)
        .locality(ALL)
        .privacy(PUBLIC)
        .action(ctx ->  silent.send("Hello World!", ctx.chatId()))
        .build();
}

In Constants add:

String START_DESCRIPTION = "Start using the fitness bot to remind you doing sports";

I don’t want to go into detail on the specific methods chained here because they are pretty well explained in the already linked documentation.

But here is some information explaining action() more thoroughly. ctx is the message context and provides related information like the chatId of the message. To reply to a message, you have to call a method on either the silent or the sender object. Both are provided by the parent class AbilityBot. As you can see in our code, we are currently using the silent object. The difference is that while silent is used to send plain text messages only, sender gives you more freedom in how to compose your reply. As you will learn later, inline keyboards, for example, need to use sender. The downside of sender is that more freedom leads to more responsibility: exceptions might occur and you have to handle them by yourself.

Database usage and state handling

State handling, and properly saving that state to a database, can get pretty complex. So it makes sense to create a separate class for that concern. Therefore we build a new class ResponseHandler, which will be responsible for processing requests:

public class ResponseHandler {
    private final MessageSender sender;
    private final Map<Long, State> chatStates;

    public ResponseHandler(MessageSender sender, DBContext db) {
        this.sender = sender;
        chatStates = db.getMap(Constants.CHAT_STATES);
    }
}

In it’s constructor, the ResponseHandler receives the database context called db. The instance, which is going to be created using the constructor, will become a field in our FitnessBot class. Field sender will be used to send messages back to the user. As you can see, the state of the bot is saved separately for each chat in a Map called chatStates.

The next step is to add a new entry to our Constants interface:

String CHAT_STATES = "CHAT_STATES";

This is the name of the table created internally by the embedded database. Pretty simple. You can read and even update the map chatStates as you wish and it will be synced with the database automagically.

magic gif

If you wondered about the State class: this is an enum containing the set of states for our bot. Currently just a single one, which says that we are waiting for the user to reply.

public enum State {
    AWAITING_TRAINING_DAY
}
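If you want to play with the state bookkeeping in isolation, here is a stand-alone sketch in which a plain HashMap stands in for the persistent map that db.getMap(...) returns; the read/update pattern is identical:

```java
import java.util.HashMap;
import java.util.Map;

public class ChatStateSketch {
    enum State { AWAITING_TRAINING_DAY }

    public static void main(String[] args) {
        // db.getMap(Constants.CHAT_STATES) would return a persistent map;
        // a HashMap models the same per-chat state bookkeeping.
        Map<Long, State> chatStates = new HashMap<>();
        long chatId = 123L;
        chatStates.put(chatId, State.AWAITING_TRAINING_DAY);
        System.out.println(chatStates.get(chatId)); // prints AWAITING_TRAINING_DAY
    }
}
```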

To make use of the whole new code we need to initialize our ResponseHandler in the bot class. We do that using a new field inside of FitnessBot.

private final ResponseHandler responseHandler;

public FitnessBot(String botToken, String botUsername) {
        super(botToken, botUsername);
        responseHandler = new ResponseHandler(sender, db);
    }

Now we need to replace the action() method in our replyToStart() ability with

.action(ctx ->  responseHandler.replyToStart(ctx.chatId()))

But our ResponseHandler doesn’t have the referenced method, so far. You can find it below.

public void replyToStart(long chatId) {
        try {
                sender.execute(new SendMessage()
                        .setText(Constants.START_REPLY)
                        .setChatId(chatId));
                chatStates.put(chatId, State.AWAITING_TRAINING_DAY);
        } catch (TelegramApiException e) {
                e.printStackTrace();
        }
}

And our text is saved in Constants:

String START_REPLY = "Welcome I'm FitnessBot. I'm here to remind you doing sports every second day!";

If you run the application now and issue /start our newly added text will appear!

Using inline keyboards

Chances are that you will use more than one inline keyboard. That’s why we create a new class called KeyboardFactory. This class will create the needed keyboard instances for us. The code is pretty self-explanatory but see for yourself.

public class KeyboardFactory {
    public static ReplyKeyboard withTodayTomorrowButtons() {
        InlineKeyboardMarkup inlineKeyboard = new InlineKeyboardMarkup();
        List<List<InlineKeyboardButton>> rowsInline = new ArrayList<>();
        List<InlineKeyboardButton> rowInline = new ArrayList<>();
        rowInline.add(new InlineKeyboardButton().setText(Constants.TRAINING_TODAY).setCallbackData(Constants.TRAINING_TODAY));
        rowInline.add(new InlineKeyboardButton().setText(Constants.TRAINING_TOMORROW).setCallbackData(Constants.TRAINING_TOMORROW));
        rowsInline.add(rowInline);
        inlineKeyboard.setKeyboard(rowsInline);
        return inlineKeyboard;
    }
}

And in our Constants file we add:

String TRAINING_TODAY = "Today";
String TRAINING_TOMORROW = "Tomorrow";
String FIND_TRAINING_DATE = "Do you want to have a workout today or tomorrow?";

Now we can just call the static method of our factory to make use of an inline keyboard. The most important code part is setCallbackData(). It defines an identifier to recognize which button has been clicked by the user. Hint: In a real-world application it might not be too smart to use the button text as the identifier for a callback, but we use it here to simplify the code.

Now we need to use the keyboard in our response by adding another sender.execute() call below the one we already defined.

public void replyToStart(long chatId) {
        try {
            sender.execute(new SendMessage()
                .setText(Constants.START_REPLY)
                .setChatId(chatId));

            sender.execute(new SendMessage()
                .setText(Constants.FIND_TRAINING_DATE)
                .setChatId(chatId)
                .setReplyMarkup(KeyboardFactory.withTodayTomorrowButtons()));

            chatStates.put(chatId, State.AWAITING_TRAINING_DAY);

        } catch (TelegramApiException e) {
                e.printStackTrace();
        }
}

Give it a try! Our bot will now offer an inline keyboard to answer the training day question.

Inline keyboard interactions

So far we aren’t able to recognize clicks on the two buttons, but we can give our FitnessBot a new ability to do so! Similar to the image reply example given by Telegram, we can filter for button responses via Flag.CALLBACK_QUERY. The identifiers of clicked buttons are sent inside the update object upd. This object contains a lot of data, but luckily there are helper methods to extract the important information conveniently: getChatId(upd) finds the chat id of the update for you, and via AbilityUtils.getUser(upd) you can get the author of the message.

Add the following lines to your bot class.

public Reply replyToButtons() {
        Consumer<Update> action = upd -> responseHandler.replyToButtons(getChatId(upd), upd.getCallbackQuery().getData());
        return Reply.of(action, Flag.CALLBACK_QUERY);
}

The tricky part is the implementation of replyToButtons() inside of responseHandler, because every single button click is processed in this method. That’s why we use it only to dispatch to the appropriate handler method (separation of concerns).

public void replyToButtons(long chatId, String buttonId) {
        try {
                switch (buttonId) {
                        case Constants.TRAINING_TODAY:
                                replyToTrainingToday(chatId);
                                break;
                        case Constants.TRAINING_TOMORROW:
                                replyToTrainingTomorrow(chatId);
                                break;
                }
        } catch (TelegramApiException e) {
                e.printStackTrace();
        }
}

The actual logic resides in the referenced methods. First, each one validates whether the bot is currently in a state where it is waiting for a response; only then is the button click processed. Depending on which button has been pressed, the bot is moved to another state.

private void replyToTrainingToday(long chatId) throws TelegramApiException {
        if (chatStates.get(chatId).equals(State.AWAITING_TRAINING_DAY)) {
                sender.execute(new SendMessage()
                        .setText(Constants.TRAINING_TODAY_REPLY)
                        .setChatId(chatId));
                chatStates.put(chatId, State.TODAY_IS_TRAINING_DAY);
        }
}

private void replyToTrainingTomorrow(long chatId) throws TelegramApiException {
        if (chatStates.get(chatId).equals(State.AWAITING_TRAINING_DAY)) {
                sender.execute(new SendMessage()
                        .setText(Constants.TRAINING_TOMORROW_REPLY)
                        .setChatId(chatId));
                chatStates.put(chatId, State.TODAY_IS_RELAX_DAY);
        }
}

The two referenced states are new and thus have to be added to our enum:

public enum State {
    AWAITING_TRAINING_DAY, TODAY_IS_TRAINING_DAY, TODAY_IS_RELAX_DAY
}

Depending on the clicked button, we want to answer something different. As usual, we add the answers to Constants:

String TRAINING_TODAY_REPLY = "Okay then take this as a reminder ;)";
String TRAINING_TOMORROW_REPLY = "Okay I'll remind you tomorrow at nine o'clock!";

Extra: Scheduled task execution

Now we need a way to remind our users to do sports on their training days. For that, we make use of ScheduledExecutorService from Java’s concurrent package. We’re not going to go into detail here, as it is not part of the bot framework. You just need to know that the class below gives us an easy way to execute a task at a specific time on a daily basis, e.g. to write a message at 9 am. Create a new class called DailyTaskExecutor:

public class DailyTaskExecutor {
    private final ScheduledExecutorService executorService;
    private final DailyTask dailyTask;

    public DailyTaskExecutor(DailyTask dailyTask) {
        this.executorService = Executors.newScheduledThreadPool(1);
        this.dailyTask = dailyTask;
    }

    public void startExecutionAt(int targetHour, int targetMin, int targetSec) {
        Runnable taskWrapper = () -> {
            dailyTask.execute();
            startExecutionAt(targetHour, targetMin, targetSec);
        };
        long delay = computeNextDelay(targetHour, targetMin, targetSec);
        executorService.schedule(taskWrapper, delay, TimeUnit.SECONDS);
    }

    private long computeNextDelay(int targetHour, int targetMin, int targetSec) {
        LocalDateTime localNow = LocalDateTime.now();
        ZoneId currentZone = ZoneId.systemDefault();
        ZonedDateTime zonedNow = ZonedDateTime.of(localNow, currentZone);
        ZonedDateTime zonedNextTarget = zonedNow.withHour(targetHour).withMinute(targetMin).withSecond(targetSec);
        if(zonedNow.compareTo(zonedNextTarget) >= 0)
            zonedNextTarget = zonedNextTarget.plusDays(1);

        Duration duration = Duration.between(zonedNow, zonedNextTarget);
        return duration.getSeconds();
    }

    public void stop() {
        executorService.shutdown();
        try {
            executorService.awaitTermination(1, TimeUnit.DAYS);
        } catch (InterruptedException ex) {
            Logger.getLogger(DailyTaskExecutor.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
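The scheduling trick above (compute the delay until the next occurrence of a wall-clock time, then reschedule after each run) can be isolated into a pure function for testing. The following sketch mirrors computeNextDelay but takes "now" as a parameter instead of reading the system clock; the class and method names are invented here:

```java
import java.time.Duration;
import java.time.ZonedDateTime;

// Standalone sketch of the delay computation used in DailyTaskExecutor.
class DelayCalculator {
    static long secondsUntil(ZonedDateTime now, int targetHour, int targetMin, int targetSec) {
        ZonedDateTime next = now.withHour(targetHour)
                .withMinute(targetMin)
                .withSecond(targetSec)
                .withNano(0);
        if (now.compareTo(next) >= 0) {
            next = next.plusDays(1); // target time already passed today, so schedule for tomorrow
        }
        return Duration.between(now, next).getSeconds();
    }
}
```

For example, at 08:00 the delay to a 09:00 target is one hour, while at 10:00 the target has already passed and the delay rolls over to 09:00 the next day.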

DailyTask is an interface with a single execute() method. We will create an implementation of this interface later.

public interface DailyTask {
    void execute();
}

But first, let’s create an instance of DailyTaskExecutor as a new field in our FitnessBot class. As soon as we have our instance, we schedule a new task that will run every morning at 9 am using startExecutionAt().

    private final DailyTaskExecutor dailyTaskExecutor;

    private FitnessBot(String botToken, String botUsername) {
        super(botToken, botUsername);
        responseHandler = new ResponseHandler(sender, db);
        dailyTaskExecutor = new DailyTaskExecutor(new MorningReminderTask(this));
        dailyTaskExecutor.startExecutionAt(9, 0, 0);
    }

Okay, now we need a new class implementing our DailyTask interface. We call it MorningReminderTask, and it takes a callback listener as a constructor parameter. This callback interface is declared inside the class itself and gets called as soon as execute() is run.

public class MorningReminderTask implements DailyTask {

    public interface Callback {
        void onTimeForMorningTask();
    }

    private final Callback callback;

    public MorningReminderTask(Callback callback) {
        this.callback = callback;
    }

    @Override
    public void execute() {
        callback.onTimeForMorningTask();
    }
}

Now you should have a compile error in your bot class, because you pass your bot as the callback in the constructor of MorningReminderTask. Before you can do that, you need to implement MorningReminderTask.Callback in your bot.

public class FitnessBot extends AbilityBot implements MorningReminderTask.Callback {

    @Override
    public void onTimeForMorningTask() {
        responseHandler.sayMorningMessages();
    }
}

Due to that interface, you need to override the onTimeForMorningTask() method. As our response handler class takes care of writing messages, we just call one of its methods, which we have to create now:

public void sayMorningMessages() {
    try {
        for (long chatId : chatStates.keySet()) {
            switch (chatStates.get(chatId)) {
                case TODAY_IS_TRAINING_DAY:
                    processTrainingDay(chatId);
                    break;
                case TODAY_IS_RELAX_DAY:
                    processRelaxDay(chatId);
                    break;
            }
        }
    } catch (TelegramApiException e) {
        e.printStackTrace();
    }
}

private void processTrainingDay(long chatId) throws TelegramApiException {
    sender.execute(new SendMessage()
            .setText(Constants.TRAINING_REMINDER)
            .setChatId(chatId));
    chatStates.put(chatId, State.TODAY_IS_RELAX_DAY);
}

private void processRelaxDay(long chatId) {
    chatStates.put(chatId, State.TODAY_IS_TRAINING_DAY);
}

In the constants file:

String TRAINING_REMINDER = "Good morning! Don't forget to do sports today.";

For each chat that is in the TODAY_IS_TRAINING_DAY state, we now send a reminder message. Furthermore, we flip the state each morning, so every second day will be a training day.
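The daily state flip can also be expressed as a pure function, which makes the every-second-day invariant easy to verify in isolation. This is a sketch; StateToggle and nextMorningState are invented names, and the enum is copied from the tutorial:

```java
// Sketch of the alternating-day logic as a pure function.
enum State { AWAITING_TRAINING_DAY, TODAY_IS_TRAINING_DAY, TODAY_IS_RELAX_DAY }

class StateToggle {
    // Every morning the two day states swap, so training happens every second day.
    static State nextMorningState(State current) {
        switch (current) {
            case TODAY_IS_TRAINING_DAY: return State.TODAY_IS_RELAX_DAY;
            case TODAY_IS_RELAX_DAY:    return State.TODAY_IS_TRAINING_DAY;
            default:                    return current; // still awaiting the user's answer
        }
    }
}
```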

And that’s it! Now your bot is able to remind you every second day to do sports. Feel free to leave a comment if you have any questions or suggestions. You can find the whole code of this getting started on our GitHub profile.

Greets,
Domi
