Showing posts with label akka.

Sunday, July 2, 2017

Cinnamon: A Way of Monitoring and Metrics Generation for an Akka ActorSystem

We develop huge applications and deploy them on multiple virtual machines and clusters. To monitor these applications, we enable logging in our application and analyse the logs with the help of tools like the Elastic Stack.
But what if we need to check the health of our application on those virtual machines and clusters? For that, we use several metrics for system health checks, like gauges, histograms and more.

Lightbend Telemetry gives us one way of generating metrics and monitoring systems (applications) by using the Cinnamon plugin. Today we look into Cinnamon for monitoring an Akka ActorSystem. Configuring Cinnamon is not rocket science; there are a few simple steps, which we are going to walk through here.

The first step for using Cinnamon is to create an account on Lightbend, from where we can download the credentials and paste them into our home directory (for Linux users). All the instructions are defined in this link.

Note: For today's example, we are using an sbt project, but we can easily integrate with Maven and Gradle as well.

>>> We need to add the Cinnamon plugin in our project/plugins.sbt file as below:


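A minimal sketch of what the plugin entry might look like; the version below is illustrative (use the one shown in your Lightbend account), and it assumes the Lightbend credentials file downloaded earlier is already in place:

// project/plugins.sbt
// Version is illustrative -- pick the one from your Lightbend Telemetry subscription
addSbtPlugin("com.lightbend.cinnamon" % "sbt-cinnamon" % "2.5.1")
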
>>> Now, we need to add some dependencies in our build.sbt:


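A sketch of the build.sbt additions, assuming the Cinnamon sbt plugin from the previous step; the library keys below are the ones the plugin exposes for Akka instrumentation and CodaHale metrics, but check the Lightbend Telemetry documentation for your version:

// build.sbt
enablePlugins(Cinnamon)

// run the application with the Cinnamon agent attached
cinnamon in run := true

// Akka instrumentation plus a CodaHale metrics backend for reporting
libraryDependencies += Cinnamon.library.cinnamonAkka
libraryDependencies += Cinnamon.library.cinnamonCHMetrics
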
>>> Add the Cinnamon configuration in application.conf as below:


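A sketch of the configuration, assuming we simply want metrics for every actor under /user reported per instance to the console; the keys follow the Cinnamon documentation, but treat them as a starting point rather than a complete reference:

# application.conf
cinnamon.application = "cinnamon-hello-world"

cinnamon.akka {
  actors {
    "/user/*" {
      report-by = instance
    }
  }
}

cinnamon.chmetrics {
  reporters += "console-reporter"
}
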
There are lots of options for configuring monitoring of the actor system; for more information, please click on this link.

Example: 

My example is a simple hello-world actor application, but when we run it, we can see the Cinnamon metrics in the console. The example is as below:


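A minimal hello-world actor along these lines (all names are illustrative) is enough for Cinnamon to start reporting actor metrics:

import akka.actor.{Actor, ActorSystem, Props}

// A trivial actor that greets whoever it is told about
class HelloActor extends Actor {
  override def receive: Receive = {
    case name: String => println(s"Hello, $name!")
  }
}

object CinnamonHelloWorld extends App {
  val system = ActorSystem("cinnamon-hello-world")
  val hello = system.actorOf(Props[HelloActor](), "hello-actor")
  hello ! "Cinnamon"
}
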
To run the application, we need to execute the sbt run command as below:


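From the project root (assuming sbt is installed and the Lightbend credentials are configured), the command is simply:

sbt run
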
As we can see in the logs, there are 4 types of metrics. Every metric has its own benefits and analysis. These metrics give us a report on the Akka actor system, such as the actor count, thread count, mailbox size and more.

For more examples, you can check the GitHub repo.




Saturday, March 25, 2017

Scala Trait and Mixin - Points to Remember

  • Traits can be viewed not only as interfaces in other languages, but also as classes with only a parameterless constructor.
  • Whenever there is some code in a trait, the trait is called a mixin.
  • trait Alarm {
     def trigger(): String
    }  
  • In Scala, if we create a simple class and pass a parameter in the constructor without any val or var, the scope of that variable is within the constructor only; if we specify val or var, the compiler creates a getter for the variable (see the sketch after this list).
  • If a trait has a variable that is not initialized with some value, then when we mix it into some class, we need to declare that variable in the primary constructor with val or var, or override it; otherwise we get a compile-time error:
  • error: class V3 needs to be abstract, since value messages 
    in trait V of type String is not defined
    
  • When we create a variable in a trait, it is defined as an abstract method in the bytecode, because Scala has the same namespace for variables and methods.
  • From a class perspective, a trait has only one constructor, which does not accept any parameters, and any methods it leaves unimplemented must be implemented by the class that mixes it in.
  • A trait can also extend a class (abstract or concrete).
  • If a trait is mixed into a class during composition (like new SubClass with Xxx), and that trait extends some class (like trait Xxx extends SomeClass), the compiler expects the class being composed to be a subclass of SomeClass; otherwise we get a compile-time error:
  • class SomeClass { /* ... */ }
    trait Xxx extends SomeClass { /* ... */ }
    class SubClass extends SomeClass
    new SubClass with Xxx // compiles, because SubClass is a subclass of SomeClass
    
  • If two traits have the same method signature but different return types and are mixed into a class, the compiler gives us an error:
  • trait V1 { def hello: String }
     trait V2 {  def hello: Int }
     error: overriding method hello in trait V1 of type => String;
     method hello in trait V2 of type => Int has incompatible type
    
  • In Scala, if we have two independent traits with the same method signature, each with its own implementation, and they are mixed into a class that does not override that method, the compiler gives us a compile-time error:
  • trait Independent1 { def freedom = "Free" }
    trait Independent2 { def freedom = "Free" }
    class Prisoner extends Independent1 with Independent2
    
    Prisoner.scala:9: error: class Prisoner inherits conflicting members:
    method freedom in trait Independent1 of type => String  and
    method freedom in trait Independent2 of type => String
    (Note: this can be resolved by declaring an override in class Prisoner.)
    class Prisoner extends Independent1 with Independent2
    
  • In Scala, the diamond problem is solved using linearization: the method of whichever trait is mixed in last in the class wins. Multiple inheritance is only possible using traits.
  • class E extends B with C // A -> B, A-> C in this case C method will win.
    
  • In linearization, calls start from right to left, and if the traits have the same method and also call super in the method body, then the calls go up the hierarchy:
  • trait A2 {  def string = "" }
    trait B2 extends A2 { override def string = "B String" + super.string }
    trait C2 extends B2 { override def string = "C String" + super.string }
    class MultipleMixinM2 extends B2 with C2
    object MultipleMixinM2 extends App { println(new MultipleMixinM2().string) } //C String B String
    
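As a quick sketch of the constructor-parameter point above (class and value names are made up for illustration):

class NoGetter(name: String)         // plain parameter: no getter is generated
class WithGetter(val name: String)   // val parameter: the compiler generates a getter

object GetterDemo extends App {
  println(new WithGetter("scala").name)   // compiles: the getter exists
  // println(new NoGetter("scala").name)  // does not compile: name is not a member
}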

Thursday, March 23, 2017

Multiple Ways to Communicate with Akka Remote Actors

This is my first video tutorial on Akka Remoting. There are multiple combinations for creating Akka remote actors and communicating with them. In this tutorial, we discuss 5 ways of communicating with remote actors. The source code for these examples is available in the https://github.com/harmeetsingh0013/learning-akka-cluster GitHub repo. Please send feedback in the comments. I hope you guys learn something new from this video.
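
To give a flavour of the topic, one common approach (sketched here, not taken verbatim from the video) is to look a remote actor up by its path with actorSelection; the system names, host, port and actor path below are made up, and classic remoting has to be enabled in application.conf for this to actually connect:

import akka.actor.{ActorSelection, ActorSystem}

object RemoteClient extends App {
  val system = ActorSystem("client-system")

  // Classic remoting path format: akka.tcp://<system>@<host>:<port>/user/<actor>
  val selection: ActorSelection =
    system.actorSelection("akka.tcp://server-system@127.0.0.1:2552/user/server-actor")

  selection ! "hello from the client"
}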



Tuesday, February 7, 2017

Building Microservices Based Enterprise Applications in Java Using Lagom - Part I

As we know, nowadays most enterprise applications are designed as a "Microservices Architecture" because of scalability, sharding, loose coupling and many other reasons. On the other hand, Java EE helps us build enterprise applications. Java helps us build good applications, but with the Java EE monolithic approach, our applications are not as scalable as microservices.
That's why Lagom comes into the picture. Lagom provides a way of building an enterprise application with a microservices-based architectural design, and also gives us Responsive, Resilient, Elastic and Message Driven features, or in other words a Reactive approach. The Lagom design philosophy is:

  1. Asynchronous.
  2. Distributed Persistence.
  3. Developer Productivity.

In this blog, we are building a sample application for managing a user module and performing CRUD operations on users.

Note: Lagom gives us a strict approach for designing applications in a Domain-Driven Design (DDD) manner, and also follows CQRS and event sourcing.

Step I

We are building a Maven-based project using Lagom, and our project structure is as below:
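A rough sketch of the layout (module names follow the domain described in the next step; the exact directory tree in the original screenshot may differ):

sample-lagom/
├── pom.xml
├── user-api/
│   └── src/main/java/...   (service interface and request/response objects)
└── user-impl/
    └── src/main/java/...   (service implementation, entity, commands, events, read side)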


Step II

As we know, Lagom follows the DDD approach, so in our Maven-based project we create submodules according to our domain. In this sample, we are managing the user domain, so we create two Maven submodules, "user-api" and "user-impl". user-api contains just the specification and declarations of the methods for our REST endpoints, and in user-impl we actually implement the services.
In user-api, we create an interface "UserService" and declare all our endpoints as in the code below:

public interface UserService extends Service {

    ServiceCall<NotUsed, Optional<User>> user(String id);

    ServiceCall<User, Done> newUser();

    ServiceCall<User, Done> updateUser();

    ServiceCall<NotUsed, User> delete(String id);

    ServiceCall<NotUsed, Optional<User>> currentState(String id);

    @Override
    default Descriptor descriptor() {

        return named("user").withCalls(
                restCall(GET, "/api/user/:id", this::user),
                restCall(POST, "/api/user", this::newUser),
                restCall(PUT, "/api/user", this::updateUser),
                restCall(DELETE, "/api/user/:id", this::delete),
                restCall(GET, "/api/user/current-state/:id", this::currentState)
        ).withAutoAcl(true);
    }
}

Step III

Now, in our user-impl module, we give the implementation of all the services. For designing services, Lagom provides us a strict model following DDD. According to DDD we need entities, commands, events and more. Initially, we need to define the commands for our module as below:


public interface UserCommand extends Jsonable {

    @Value
    @Builder
    @JsonDeserialize
    final class CreateUser implements UserCommand, PersistentEntity.ReplyType<Done> {
        User user;
    }

    @Value
    @Builder
    @JsonDeserialize
    final class UpdateUser implements UserCommand, PersistentEntity.ReplyType<Done> {
        User user;
    }

    @Value
    @Builder
    @JsonDeserialize
    final class DeleteUser implements UserCommand, PersistentEntity.ReplyType<User> {
        User user;
    }

    @Immutable
    @JsonDeserialize
    final class UserCurrentState implements UserCommand, PersistentEntity.ReplyType<Optional<User>> {}
}
For designing the POJOs in Java, I am using the Lombok library for creating immutable classes and removing boilerplate code. Lagom also provides other options, as mentioned in the documentation.

Note: You can use any library, but before using it, configure your IDE according to that library.

Step IV

We are following CQRS, so we need to define events as well: 

public interface UserEvent extends Jsonable, AggregateEvent<UserEvent> {

    @Override
    default AggregateEventTagger<UserEvent> aggregateTag() {
        return UserEventTag.INSTANCE;
    }

    @Value
    @Builder
    @JsonDeserialize
    final class UserCreated implements UserEvent, CompressedJsonable {
        User user;
        String entityId;
    }

    @Value
    @Builder
    @JsonDeserialize
    final class UserUpdated implements UserEvent, CompressedJsonable {
        User user;
        String entityId;
    }

    @Value
    @Builder
    @JsonDeserialize
    final class UserDeleted implements UserEvent, CompressedJsonable {
        User user;
        String entityId;
    }
}

Our events should be in compressed form, because we need to persist the whole event in the DB. For more details, go through the Lagom documentation.

Step V

We need to define our entity, and define its behaviour according to the commands and events, as below:

public class UserEntity extends PersistentEntity<UserCommand, UserEvent, UserState> {

    @Override
    public Behavior initialBehavior(Optional<UserState> snapshotState) {

        // initial behaviour of user
        BehaviorBuilder behaviorBuilder = newBehaviorBuilder(
                UserState.builder().user(Optional.empty())
                        .timestamp(LocalDateTime.now().toString()).build()
        );

        behaviorBuilder.setCommandHandler(CreateUser.class, (cmd, ctx) ->
                ctx.thenPersist(UserCreated.builder().user(cmd.getUser())
                        .entityId(entityId()).build(), evt -> ctx.reply(Done.getInstance()))
        );

        behaviorBuilder.setEventHandler(UserCreated.class, evt ->
                UserState.builder().user(Optional.of(evt.getUser()))
                        .timestamp(LocalDateTime.now().toString()).build()
        );

        behaviorBuilder.setCommandHandler(UpdateUser.class, (cmd, ctx) ->
                ctx.thenPersist(UserUpdated.builder().user(cmd.getUser()).entityId(entityId()).build()
                        , evt -> ctx.reply(Done.getInstance()))
        );

        behaviorBuilder.setEventHandler(UserUpdated.class, evt ->
                UserState.builder().user(Optional.of(evt.getUser()))
                        .timestamp(LocalDateTime.now().toString()).build()
        );

        behaviorBuilder.setCommandHandler(DeleteUser.class, (cmd, ctx) ->
                ctx.thenPersist(UserDeleted.builder().user(cmd.getUser()).entityId(entityId()).build(),
                        evt -> ctx.reply(cmd.getUser()))
        );

        behaviorBuilder.setEventHandler(UserDeleted.class, evt ->
                UserState.builder().user(Optional.empty())
                        .timestamp(LocalDateTime.now().toString()).build()
        );

        behaviorBuilder.setReadOnlyCommandHandler(UserCurrentState.class, (cmd, ctx) ->
                ctx.reply(state().getUser())
        );

        return behaviorBuilder.build();
    }
}

By default, Lagom recommends using Cassandra for storing events, but Lagom supports RDBMSs as well. For more details, please click on this link.
Here we define our handling of commands and events. In simple words, here we decide what will happen for a specific command and what will happen for a specific event, and we maintain the state in memory.

Step VI

Sometimes we also need to persist our data separately, in addition to the events. In that case, Lagom provides us the "ReadSideProcessor" abstract class to perform operations according to the events that happened, just like below:

  
public class UserEventProcessor extends ReadSideProcessor<UserEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(UserEventProcessor.class);

    private final CassandraSession session;
    private final CassandraReadSide readSide;

    private PreparedStatement writeUsers;
    private PreparedStatement deleteUsers;

    @Inject
    public UserEventProcessor(final CassandraSession session, final CassandraReadSide readSide) {
        this.session = session;
        this.readSide = readSide;
    }

    @Override
    public PSequence<AggregateEventTag<UserEvent>> aggregateTags() {
        LOGGER.info(" aggregateTags method ... ");
        return TreePVector.singleton(UserEventTag.INSTANCE);
    }

    @Override
    public ReadSideHandler<UserEvent> buildHandler() {
        LOGGER.info(" buildHandler method ... ");
        return readSide.builder("users_offset")
                .setGlobalPrepare(this::createTable)
                .setPrepare(evtTag -> prepareWriteUser()
                        .thenCombine(prepareDeleteUser(), (d1, d2) -> Done.getInstance())
                )
                .setEventHandler(UserCreated.class, this::processPostAdded)
                .setEventHandler(UserUpdated.class, this::processPostUpdated)
                .setEventHandler(UserDeleted.class, this::processPostDeleted)
                .build();
    }

    // Executed only once, while the application is starting
    private CompletionStage<Done> createTable() {
        return session.executeCreateTable(
                "CREATE TABLE IF NOT EXISTS users ( " +
                        "id TEXT, name TEXT, age INT, PRIMARY KEY(id))"
        );
    }

    /*
    * START: Prepare statement for inserting user values into the users table.
    * This is just the creation of the prepared statement; we will bind it when the event occurs.
    */
    private CompletionStage<Done> prepareWriteUser() {
        return session.prepare(
                "INSERT INTO users (id, name, age) VALUES (?, ?, ?)"
        ).thenApply(ps -> {
            setWriteUsers(ps);
            return Done.getInstance();
        });
    }

    private void setWriteUsers(PreparedStatement statement) {
        this.writeUsers = statement;
    }

    // Bind the prepared statement when the UserCreated event is processed
    private CompletionStage<List<BoundStatement>> processPostAdded(UserCreated event) {
        BoundStatement bindWriteUser = writeUsers.bind();
        bindWriteUser.setString("id", event.getUser().getId());
        bindWriteUser.setString("name", event.getUser().getName());
        bindWriteUser.setInt("age", event.getUser().getAge());
        return CassandraReadSide.completedStatements(Arrays.asList(bindWriteUser));
    }
    /* ******************* END ****************************/

    /* START: Prepare statement for updating the data in the users table.
    * This is just the creation of the prepared statement; we will bind it when the event occurs.
    */
    private CompletionStage<List<BoundStatement>> processPostUpdated(UserUpdated event) {
        BoundStatement bindWriteUser = writeUsers.bind();
        bindWriteUser.setString("id", event.getUser().getId());
        bindWriteUser.setString("name", event.getUser().getName());
        bindWriteUser.setInt("age", event.getUser().getAge());
        return CassandraReadSide.completedStatements(Arrays.asList(bindWriteUser));
    }
    /* ******************* END ****************************/

    /* START: Prepare statement for deleting the user from the table.
    * This is just the creation of the prepared statement; we will bind it when the event occurs.
    */
    private CompletionStage<Done> prepareDeleteUser() {
        return session.prepare(
                "DELETE FROM users WHERE id=?"
        ).thenApply(ps -> {
            setDeleteUsers(ps);
            return Done.getInstance();
        });
    }

    private void setDeleteUsers(PreparedStatement deleteUsers) {
        this.deleteUsers = deleteUsers;
    }

    private CompletionStage<List<BoundStatement>> processPostDeleted(UserDeleted event) {
        BoundStatement bindWriteUser = deleteUsers.bind();
        bindWriteUser.setString("id", event.getUser().getId());
        return CassandraReadSide.completedStatements(Arrays.asList(bindWriteUser));
    }
    /* ******************* END ****************************/
}

For more details, please click on this link.

Step VII

Finally, we are going to define the implementation of our REST service endpoints as below:


public class UserServiceImpl implements UserService {

    private final PersistentEntityRegistry persistentEntityRegistry;
    private final CassandraSession session;

    @Inject
    public UserServiceImpl(final PersistentEntityRegistry registry, ReadSide readSide, CassandraSession session) {
        this.persistentEntityRegistry = registry;
        this.session = session;

        persistentEntityRegistry.register(UserEntity.class);
        readSide.register(UserEventProcessor.class);
    }

    @Override
    public ServiceCall<NotUsed, Optional<User>> user(String id) {
        return request -> {
            CompletionStage<Optional<User>> userFuture =
                    session.selectAll("SELECT * FROM users WHERE id = ?", id)
                            .thenApply(rows ->
                                    rows.stream()
                                            .map(row -> User.builder().id(row.getString("id"))
                                                    .name(row.getString("name")).age(row.getInt("age"))
                                                    .build()
                                            )
                                            .findFirst()
                            );
            return userFuture;
        };
    }

    @Override
    public ServiceCall<User, Done> newUser() {
        return user -> {
            PersistentEntityRef<UserCommand> ref = userEntityRef(user);
            return ref.ask(CreateUser.builder().user(user).build());
        };
    }

    @Override
    public ServiceCall<User, Done> updateUser() {
        return user -> {
            PersistentEntityRef<UserCommand> ref = userEntityRef(user);
            return ref.ask(UpdateUser.builder().user(user).build());
        };
    }

    @Override
    public ServiceCall<NotUsed, User> delete(String id) {
        return request -> {
            User user = User.builder().id(id).build();
            PersistentEntityRef<UserCommand> ref = userEntityRef(user);
            return ref.ask(DeleteUser.builder().user(user).build());
        };
    }

    @Override
    public ServiceCall<NotUsed, Optional<User>> currentState(String id) {
        return request -> {
            User user = User.builder().id(id).build();
            PersistentEntityRef<UserCommand> ref = userEntityRef(user);
            return ref.ask(new UserCurrentState());
        };
    }

    private PersistentEntityRef<UserCommand> userEntityRef(User user) {
        return persistentEntityRegistry.refFor(UserEntity.class, user.getId());
    }
}

There are still lots of things that Lagom provides us for developing enterprise applications, which we will cover in the next blog.

For the full source of this example, please click on this link.

Wednesday, January 25, 2017

Short Interview With SMACK Tech Stack !!!

Hello guys, today we conduct a short interview with SMACK about its architecture and its uses. Let's start with some introduction.

Interviewer: How would you describe yourself?
SMACK: I am SMACK (Spark, Mesos, Akka, Cassandra and Kafka), and all of my technologies are open source. A collaboration between Mesosphere and Cisco bundled these technologies together and created a product called Infinity, which is used to solve pipeline data challenges where the speed of response matters, like fraud detection systems.

Interviewer: Why SMACK?
SMACK: Today's modern data-processing challenges are:
  • Data is getting bigger, or more accurately, the number of data sources is increasing.
  • For many modern business models today, data from one hour ago is practically obsolete.
  • Data analysis becomes too slow to get any return on investment from the information.
  • One modern requirement is horizontal scaling at low cost.
  • We live in an age where data freshness often matters more than the amount or size of the data.
There are many challenges we are facing; SMACK exists because one technology doesn't make an architecture. SMACK is a pipelined architecture model for data processing.

Interviewer: Lambda Architecture is a data-processing architecture that has the advantages of both batch and stream processing methods. So, how is SMACK different?
SMACK: Yes, Lambda Architecture has these features, but most lambda solutions cannot meet two needs at the same time:
  1. Handling a massive data stream in real time.
  2. Handling multiple and different data models from multiple data sources.
For these, Apache Spark is responsible for real-time analysis of both historical and recent data from a massive information torrent, and all such information and analysis results are persisted in Apache Cassandra. So, in case of failure, we can recover the real-time data from any point in time. With Lambda Architecture that is not always possible.

Interviewer: SMACK, can you briefly describe your technologies?
SMACK: Yes, sure. As we discussed, SMACK is basically used as a pipeline data architecture for online data stream processing. There are lots of books and articles available on each and every technology, but we use every technology for a specific purpose:
  • Apache Spark: Processing Engine.
  • Akka: The Model.
  • Apache Kafka: The Broker.
  • Apache Cassandra: The Storage.
  • Apache Mesos: The Container.
See, all are Apache projects with the exception of Akka.

Interviewer: Is SMACK the only solution?
SMACK: No. You can replace individual components as per your requirements; for example, YARN could be used as the cluster scheduler instead of Mesos, and Apache Flink would be a suitable batch and stream processing alternative to Akka. There are many alternatives to SMACK.

Interviewer: Could you discuss one of your case studies with us?
SMACK: Yes, but not now. I need to go hang out with my technologies; we will discuss that further in our next interview.

Interviewer: Can I take a picture of you?
SMACK: Yes sure, cheeezzzz .....

