Sunday, August 26, 2018

A quick catch up before Java 11

Java 11's release candidate is already here, while much of the industry is still settled on Java 8. From now on, we will see a new release every six months. It is good that Java is evolving fast enough to keep up with its challengers, but the pace is also daunting; even the Java ecosystem (build tools, IDEs, etc.) is not keeping up that fast. It can feel like we are losing track: if I can't keep up with my favorite language, I might as well choose another one, since adapting to a new language would be no harder. Below, we will discuss some of the useful features from Java 8, 9, and 10 that you need to know before jumping into Java 11.

Before Java 8? Too Late!

Still on something before Java 8? Unfortunately, you will need to consider yourself out of the scope of this discussion; you are too late. If you want to learn what's new after Java 7, then Java will feel like a brand new language to you!

Java 8: A Tradition Shift

Java 8 was released four years ago, and everything that was new in it has become quite familiar by now. The good thing is that it will still be supported for some time in parallel with future versions. However, Oracle is already planning to make its support paid, as it remains the most used and preferred version to date. Java 8 was a tradition shift that made Java fit for today's and tomorrow's applications. If you talk to a developer today, you can't just keep talking about OOP concepts; this is the age of JavaScript, Scala, and Kotlin, and you must speak the language of expressions, streams, and functional interfaces. Java 8 brought these functional features and kept Java in the mainstream, and they will stay valuable against its functional rivals, Scala and JavaScript.

A Quick Recap

Lambda Expression: (parameters) -> {body}

Lambda expressions opened the gates for functional programming lovers to keep using Java. A lambda expression takes zero or more parameters, which can be accessed in the expression body, and returns the evaluated result.
Comparator<Integer> comparator = (a, b) -> a-b;
System.out.println(comparator.compare(3, 4)); // -1

Functional Interface: an Interface With Only One Abstract Method

A lambda expression is itself treated as an instance of a functional interface and can be assigned to one, as shown above. Java 8 also ships a set of ready-made functional interfaces, such as the one shown below:
BiFunction<Integer, Integer, Integer> comparator = (a, b) -> a-b;
System.out.println(comparator.apply(3, 4)); // -1

Refer to the java.util.function package for more functional constructs: Function, Supplier, Consumer, Predicate, etc. One can also mark a functional interface explicitly with @FunctionalInterface.
An interface may also provide one or more default method implementations and still remain a functional interface. This helps avoid unnecessary abstract base classes that exist only to hold default implementations.
Static and instance methods can be referenced with the :: operator, and constructors with ::new; such references can be passed as functional parameters, e.g. System.out::println.
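The points above can be sketched in one small example. This is a minimal, illustrative program (the Greeter interface and its method names are made up for this demo), showing a default method inside a functional interface, a lambda assigned to that interface, and method/constructor references passed as functional parameters:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class MethodRefDemo {

    // Still a functional interface: one abstract method plus a default method.
    @FunctionalInterface
    interface Greeter {
        String greet(String name);

        default String greetLoudly(String name) {
            return greet(name).toUpperCase();
        }
    }

    public static void main(String[] args) {
        // Lambda assigned to the functional interface; the default method comes for free.
        Greeter greeter = name -> "Hello, " + name;
        System.out.println(greeter.greetLoudly("java")); // HELLO, JAVA

        // Instance method reference used as a functional parameter.
        List<String> names = Arrays.asList("a", "b");
        Consumer<String> printer = System.out::println;
        names.forEach(printer);

        // Constructor reference via ::new.
        Supplier<StringBuilder> builder = StringBuilder::new;
        System.out.println(builder.get().append("built")); // built
    }
}
```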

Streams: Much More Than Iterations

Streams are a sequence of objects and operations. Many default methods have been added to the interfaces to support the forEach, filter, map, and reduce constructs of streams. Java library classes that provide collections now support streams too, e.g. BufferedReader.lines(), and all collections can easily be converted to streams. Parallel stream operations are also supported, which distribute the work across multiple CPUs internally.

Intermediate Operations: the Lazy Operation

Intermediate operations are performed lazily: nothing happens until a terminating operation is called.
map (mapper): each element is converted one-to-one into another form.
filter (predicate): keeps the elements for which the given predicate is true.
peek(), limit(), and sorted() are the other intermediate operations.
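A small sketch of this laziness (the visit counter is only for illustration): the pipeline below does nothing until the terminal count() is invoked.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreamDemo {
    public static void main(String[] args) {
        AtomicInteger visited = new AtomicInteger();

        // Only intermediate operations so far: nothing is evaluated yet.
        Stream<Integer> pipeline = Stream.of(1, 2, 3, 4, 5)
                .peek(n -> visited.incrementAndGet())
                .filter(n -> n % 2 == 0)
                .map(n -> n * 10);

        System.out.println(visited.get()); // 0 - the pipeline has not run

        // The terminal operation triggers evaluation of the whole pipeline.
        long count = pipeline.count();
        System.out.println(count);         // 2 (for 20 and 40)
        System.out.println(visited.get()); // 5 - now every element was visited
    }
}
```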

Terminating Operations: the Resulting Operations

forEach (consumer): iterates over each element and consumes it.
reduce (initialValue, accumulator): starts with initialValue, iterates over each element, and keeps updating a running value that is eventually returned.
collect (collector): collects the lazily evaluated result using a collector, such as those in java.util.stream.Collectors, including toList(), joining(), summarizingInt(), averagingInt() (and their Long/Double variants), groupingBy(), and partitioningBy().
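The terminal operations above can be sketched with a few lines; the sample words are made up for the demo.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta");

        // reduce: start from an initial value and accumulate over each element.
        int totalLength = words.stream()
                .reduce(0, (sum, w) -> sum + w.length(), Integer::sum);
        System.out.println(totalLength); // 19

        // collect with joining.
        String joined = words.stream().collect(Collectors.joining(", "));
        System.out.println(joined); // alpha, beta, gamma, delta

        // collect with groupingBy: group the words by their length.
        Map<Integer, List<String>> byLength = words.stream()
                .collect(Collectors.groupingBy(String::length));
        System.out.println(byLength.get(5)); // [alpha, gamma, delta]
    }
}
```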

Optional: Get Rid of Null Programming

Null-based programming is considered bad practice, but earlier there was hardly any way to avoid it. Instead of testing for null, we can now wrap a value in an Optional and test isPresent(). It is worth reading up on; there are multiple constructs involved, and many stream operations return Optional as well.
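A short sketch of Optional replacing a null check; the config map and its keys are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("host", "localhost");

        // Wrap possibly-null lookups instead of testing for null directly.
        Optional<String> host = Optional.ofNullable(config.get("host"));
        Optional<String> port = Optional.ofNullable(config.get("port"));

        System.out.println(host.isPresent()); // true

        // Transform and fall back without any explicit null check.
        System.out.println(host.map(String::toUpperCase).orElse("NONE")); // LOCALHOST
        System.out.println(port.orElse("8080"));                          // 8080
    }
}
```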

JVM Changes: PermGen Retired

PermGen has been removed completely and replaced by Metaspace. Metaspace is no longer part of the heap; it lives in the native memory allocated to the process. JVM tuning now has different aspects, as monitoring is required not just for the heap but also for native memory.
Some combinations of GC options have been deprecated, and the GC is selected automatically based on the environment configuration.
There were other changes in NIO, Date-Time, security, compact JDK profiles, and tools like jdeps, jjs (the Nashorn JavaScript engine), etc.

Java 9: Continue the Tradition

Java 9 has been around for more than a year now, yet its key feature, the module system, is still not widely adopted. In my opinion, it will take more time for such features to really reach the mainstream, because they challenge developers in the way they design classes: they now need to think in terms of application modules rather than just groups of classes. It is a challenge similar to the one a traditional developer faces with microservice-based development. Java 9 continued adding functional programming features to keep Java alive and also improved the JVM internals.

Java Platform Module System: Small Is Big

The best-known feature of Java 9 is the Java Platform Module System (JPMS). It is a great step towards real encapsulation: a big module is broken into small, clear modules, each consisting of closely related code and data. It is similar to an OSGi bundle, where each bundle declares the dependencies it consumes and exposes the things other modules depend on.
JPMS introduces an assemble phase between compile time and runtime that can build a custom runtime image of the JDK and JRE. The JDK itself now consists of modules:
  ~ java --list-modules
java.activation@9.0.2
java.base@9.0.2
java.compiler@9.0.2
java.corba@9.0.2
...

These modules are called system modules. A jar loaded without module information is loaded into the unnamed module. We can define our own application module by providing the following directives in a module-info.java file:
requires — declares dependencies on other modules
exports — exports the public APIs/interfaces of the packages in the module
opens — opens a package for reflective access
uses — declares a service interface that this module consumes (via ServiceLoader)
To learn more, see Oracle's Project Jigsaw quick-start guide.
Here are the quick steps in the IntelliJ IDE: 
1. Create a module in IntelliJ: Go to File > New > Module - "first.module"
2. Create a Java class in /first.module/src:
package com.test.modules.print;
public class Printer {
    public static void print(String input){
        System.out.println(input);
    }
}

4. Add module-info.java: /first.module/src > New > module-info.java
module first.module {
    exports com.test.modules.print; // exports public apis of the package.
}

5. Similarly, create another module, "main.module", with its own module-info.java and a Main.java:
module main.module {
    requires first.module;
}
package com.test.modules.main;
import com.test.modules.print.Printer;
public class Main {    
  public static void main(String[] args) {        
    Printer.print("Hello World");    
  }
}

6. IntelliJ compiles the modules automatically and keeps track of the dependencies and the --module-source-path.
7. To run Main.java, java needs --module-path (-p) and -m:
java -p /Workspaces/RnD/out/production/main.module:/Workspaces/RnD/out/production/first.module -m main.module/com.test.modules.main.Main
Hello World
Process finished with exit code 0

So, this way, we can define the modules. Java 9 came with many additional features; some of the important ones are listed below.

Catching up With the Rivals

Reactive Programming — Java 9 introduced support for Reactive Streams, enabling asynchronous publish/subscribe communication between publishers and subscribers. The standard interfaces were added in the java.util.concurrent.Flow class.
JShell – the Java Shell – just like any other scripting language, Java can now be used interactively from a shell.
Stream and collections enhancements: Java 9 added a few stream APIs for "ordered" and "optional" data, such as takeWhile(), dropWhile(), and Stream.ofNullable(). of() factory methods were added to ease creating small collections, much like literals in JavaScript.
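The Java 9 additions above can be sketched briefly: compact collection factories plus the new ordered stream operations.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Java9CollectionsDemo {
    public static void main(String[] args) {
        // Compact, immutable collection creation (Java 9).
        List<Integer> list = List.of(1, 2, 3, 4, 5);
        Set<String> set = Set.of("a", "b");
        Map<String, Integer> map = Map.of("one", 1, "two", 2);
        System.out.println(map.get("two")); // 2

        // takeWhile/dropWhile operate on the ordered prefix of a stream.
        List<Integer> prefix = list.stream().takeWhile(n -> n < 3).collect(Collectors.toList());
        List<Integer> rest   = list.stream().dropWhile(n -> n < 3).collect(Collectors.toList());
        System.out.println(prefix); // [1, 2]
        System.out.println(rest);   // [3, 4, 5]

        // Stream.ofNullable bridges a possibly-null value into a stream.
        System.out.println(Stream.ofNullable(null).count()); // 0
    }
}
```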

Self-Tuning JVM

G1 has been made the default GC, and there have been improvements in its self-tuning features. CMS has been deprecated.

Access to Stack

The StackWalker class was added for lazy access to stack frames; we can traverse and filter the stack on demand.
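A minimal StackWalker sketch (the outer/inner method names are illustrative): the walker hands the callback a lazy stream of frames that we can traverse and filter.

```java
import java.util.List;
import java.util.stream.Collectors;

public class StackWalkerDemo {

    static List<String> outer() {
        return inner();
    }

    static List<String> inner() {
        // walk() gives us a lazy Stream<StackFrame>; take the top three frames.
        return StackWalker.getInstance().walk(frames ->
                frames.limit(3)
                      .map(StackWalker.StackFrame::getMethodName)
                      .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        System.out.println(outer()); // [inner, outer, main]
    }
}
```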

Multi-Release JAR Files: MRJAR

A single jar may now contain classes compatible with multiple Java versions. To be honest, I am not sure how useful this feature will turn out to be.

Java 10: Getting Closer to the Functional Languages

Java 10 comes with the old favorite var of JavaScript, though in Java it is only local-variable type inference: the compiler infers a static type from the initializer, so a var variable cannot later be reassigned a value of an incompatible type. You can skip declaring the type of a local variable, and you can even let a collection's element type be inferred. The following is valid in Java:
var test = "9";                              // inferred as String
var set = Set.of(5, "X", 6.5, new Object()); // inferred as Set<Object>

The code is getting less verbose, and some of the magic of scripting languages is making its way into Java. It may bring some of the downsides of these features along, but it gives a lot of power to the developer.

More Powerful JVM

Java 10 introduced parallelism for full GC in G1, improving overall performance when a full GC does happen.
The heap can now be allocated on an alternative memory device attached to the system. This helps prioritize Java processes: a low-priority process may use slower memory, while important ones use faster memory.
Java 10 also improved thread handling through thread-local handshakes, which allow stopping individual threads without a global safepoint. Ahead-Of-Time compilation (experimental) was also added, and bytecode generation for loops was enhanced.

Enhanced Language

Java 10 also improved Optional (the no-argument orElseThrow()) and added unmodifiable-collection APIs such as List.copyOf() and Collectors.toUnmodifiableList().
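A brief sketch of these Java 10 additions, using made-up sample data:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class Java10ApiDemo {
    public static void main(String[] args) {
        // Optional.orElseThrow() is a clearer, intention-revealing alias for get().
        String value = Optional.of("present").orElseThrow();
        System.out.println(value); // present

        // List.copyOf returns an unmodifiable copy of the source collection.
        List<Integer> source = new ArrayList<>(List.of(1, 2, 3));
        List<Integer> copy = List.copyOf(source);
        source.add(4);            // the source can still change...
        System.out.println(copy); // [1, 2, 3] - ...but the copy cannot

        // Collectors.toUnmodifiableList collects straight into an unmodifiable list.
        List<Integer> doubled = source.stream()
                .map(n -> n * 2)
                .collect(Collectors.toUnmodifiableList());
        System.out.println(doubled); // [2, 4, 6, 8]
    }
}
```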

Conclusion 

We have seen the journey from Java 8 to Java 10 and the influence of other functional and scripting languages on Java. Java is a strong object-oriented programming language, and at the same time it now supports a lot of functional constructs. Java will not only bring in top features from other languages, it will also keep improving its internals. It is evolving at a great speed, so stay tuned before it phases you out: Java 11 and 12 are on the way!

Saturday, August 25, 2018

5 Hard Lessons From Microservices Development

Learn about challenges in tools, training, and methodology for microservices development that we learned the hard way, so you don't have to.


Microservices-based development is happening all around the industry; by many estimates, more than 70% of organizations are attempting microservice-based software development. Microservices simplify the integration of businesses, processes, technology, and people by breaking the big-bang monolith problem into a smaller set of problems that can be handled independently. However, this brings the new problem of managing the relations between those smaller sets. In the monolithic world, we managed fewer independent units, so there was less operations and planning effort; now we need different processes, tools, training, methodology, and teams to ease microservices development.

Our Microservices-Based Project 

We have been developing a highly complex project on a microservice architecture, in which we import gigabytes of observation data every day and build statistical models to predict future demand. End users may interact with the system to influence the statistical models and prediction methods, and may analyze demand by simulating its impact. There are around 50+ bounded contexts with 100+ independent deployment units communicating over REST and messaging, and 200+ process instances are needed to run the whole system. We started this project from scratch with almost no practical experience with microservices, and we faced lots of issues in project planning, training, testing, quality management, deployment, and operations.

Learnings 

I am sharing the top five lessons from my experience that helped us overcome those problems.

1. Align Development Methodology and Project Planning

Agile development methodology is considered best for microservice development, but only if it is aligned well. Monolithic development has one deliverable and one process pipeline; here, we have multiple deliverables, so unless we align a process pipeline for each deliverable, we won't achieve the desired effectiveness of microservice development.
We also faced project-planning issues, as we could not shape the product and user stories in a way that produced independent products to which we could apply the process pipeline. For seven sprints, we could not demonstrate business value to the end user, as our product workflow was only ready after that. We used to have very big user stories, which sometimes stretched across multiple sprints and impacted a number of microservices.
Consider the following aspects of project planning:
  1. Run parallel sprint pipelines for Requirement Definition, Architecture, Development, DevOps, and Infrastructure. Have a Scrum of Scrum for common concerns and integration points.
  2. Keep few initial sprints for Architecture and DevOps, and start the Development sprint only after the first stable version of the architecture and DevOps is setup.
  3. Architectural PoCs and decision tasks should be planned a couple of sprints before the actual development sprint.
  4. Define metrics for each sprint to measure project quality quantitatively.
  5. Clearly call out architectural changes in the backlog and prioritize them well. Consider their adaptation efforts based on the current size of the project and impact on microservices.
  6. Have an infrastructure resource (expert, software, hardware, or tool) plan.
  7. Plan for configuration management.
  8. Include agile training in the project induction.
  9. Include multiple sprint artifact dependencies in the Definition of Ready (DoR) and Definition of Done.
  10. Train the product owner and project planner to plan the scrums for requirement definition, architecture, etc., such that they fulfill the DoR.
  11. Have smaller user stories, making sure the stories selected in a sprint are really of a unit size that impacts very few deployment units.
  12. If a new microservice is getting added in a particular sprint, then consider the effort for CI/CD, Infrastructure, DevOps.

2. Define an Infrastructure Management Strategy

In the monolithic world, infrastructure management is not that critical at the start of a project, so infra-related tasks may be delayed until stable deliveries start coming. In microservices development, however, the deployment units are small, so they start coming early, and their number is high; thus, a strong infrastructure management strategy is needed.
We delayed defining the infrastructure management strategy and faced a lot of issues determining the appropriate capacity of the infrastructure and getting it on time. We had not tracked the deployment/usage of infra components well, which delayed adapting the infra and left us with too little knowledge of our own infrastructure. We had to put a lot of effort into streamlining the infra components in the middle of the project, and that had many side effects on the functional scope being implemented.
Infrastructure here includes cross-cutting components, supporting tools, and hardware/software needed for running the system. Things like service registry, discovery, API management, configurations, tracing, log management, monitoring, and service health checks may need separate tools. Consider at least the following in infrastructure management:
  1. Capacity planning – Do capacity planning from the start of the project, and then review/adjust it periodically.
  2. Get the required infrastructure (software/hardware/tools) ahead of time and test them well before the team adopts them.
  3. Define a Hardware/Software/Service onboarding plan which covers details of the tools in different physical environments, like development testing, QA testing, performance testing, staging, UAT, Prod, etc.
  4. Consider multiple extended development testing/integration environments, as multiple developers need to test their artifacts, and their development machine may not be capable of holding required services.
  5. Onboard an infrastructure management expert to accelerate the project setup.
  6. Define a deployment strategy and plan its implementation in the early stages of the project. Don’t go for intermediate deployment methodology. If you want to go for Docker and Kubernetes-based deployment, then do it from the start of the project — don’t wait and delay its implementation.
  7. Define access management and resource provisioning policies.
  8. Have automated, proactive monitoring on your infrastructure.
  9. Track infrastructure development in parallel to the project scope.

3. Define Microservices-Based Architecture and Its Evolutions

Microservices can be developed and deployed independently, but it is hard to maintain standards and practices across services throughout development. Defining a base architecture that all microservices must follow, and then letting that architecture evolve, helps here.
We had defined a very basic architecture, with a core platform covering logging, boot, and a few common aspects. However, we left a lot of things, such as messaging, database, caching, folder structures, and compression/decompression, to come in later evolutions, and as a result the platform changed heavily in parallel with the functional scope in the microservices. We had not given the core platform enough time before jumping into the functional-scope sprints.
Consider the following in the base architecture, and implement it well before the functional scope implementation. Don’t rely too much on the statement “Learn from the system and then improvise.” Define the architecture in advance, stay ahead of the situation, and gain knowledge as soon as possible.
  1. Define a core platform covering cross-cutting concerns and abstractions. The core platform may cover logging, tracing, boot, compression/decompression, encryption/decryption, common aspects, interceptors, request filters, configurations, exceptions, etc. Abstractions of messaging, caching, and database may also be included in the platform.
  2. Microservice structure – Define a folder and code structure with naming conventions. Don’t delay it; late introduction will cost a lot.
  3. Build a mechanism for CI/CD – Define a CD strategy, even for the local QA environment, to avoid facing issues directly in the UAT/pre-UAT environment.
  4. Define an architecture change strategy – how architecture changes will be delivered and how they will be adapted.
  5. A version strategy for Source Code, API, Builds, Configurations, and documents.
  6. Keep validating the design against NFRs.
  7. Define a Test Architecture to cover the testing strategy.
  8. Document the module architecture with clearly defined bounded contexts and data isolations.

4. Team Management

The microservice world needs a different mindset than the monolithic one. Each microservice may be considered independent, so the developers of different microservices are independent too. This brings a different kind of challenge: we want our developers to keep code consistent across units, follow the same coding standards, and build on top of the core platform, and at the same time we want them not to trust other microservices' code, as if it were developed by another company's developers.
Consider the following in your team management:
  1. Define the responsibility of “Configuration Management” to a few team members who are responsible for maintaining the configuration and dependencies information. They are more of an information aggregator, but can be considered a source of truth when it comes to configuration.
  2. Define a “Contract Management” team consisting of developers/architects who are responsible for defining the interaction between microservices.
  3. Assign module owners and teams based on bounded context. They are responsible for everything related to their assigned module, and informing the “Configuration Management” team of public concerns.
  4. Team seating may be considered module-wise; developers should talk to each other only via contract, otherwise they are a completely separate team. If any change is needed in the contract, then it should come via “Contract Management.”
  5. Build the DevOps team from the development team. One may rotate people so everybody gains knowledge of Ops.
  6. Encourage multi-skilling in the team.
  7. Self-motivated team.
  8. Continuous Training Programs.     

5. Keep Sharing the Knowledge

Microservices are evolving day by day, and a lot of new tools and concepts are being introduced. Teams need to stay up to date; thanks to the microservice architecture, you may even change the technology stack of a microservice if needed. Since teams are independent, we need to keep sharing learning and knowledge across teams.
We faced issues where the same or similar problems were reproduced by different teams, which then tried to fix them in different ways. Teams also struggled to understand bounded contexts, data isolation, etc.
Consider the following:
  1. Educate teams on domain-driven design, bounded context, data isolation, integration patterns, event design, continuous deployment, etc.
  2. Create a learning database where each team may submit entries in the sprint retrospection.
  3. Train teams to follow unit testing, mocking, and integration testing. Most of the time, the definition of a “unit” is misunderstood by developers, and “integration testing” is given the lowest priority. Both must be followed; done correctly, they should be the simplest things to adhere to.
  4. Share knowledge of performance engineering — for example:
    1. Don’t over loop
    2. Use cache efficiently
    3. Use RabbitMQ messaging as a flow, not as data storage
    4. Concurrent consumer and publishers
    5. Database partitioning and clustering
    6. Do not repeat yourself (DRY)

Conclusion

Microservices are being adopted at a good pace, and things are getting more mature with time. I have shared a few of the hard lessons we experienced in our microservice-based project. I hope they help your project avoid the same mistakes.