JavaOne 2008: SOA and Performance

A well-performing SOA implementation requires considerable work and performance tuning

DeCare Systems Ireland Blog

As Service-Oriented Architectures gain ground, it becomes obvious that their performance is key to their success. I'm going to briefly write about two sessions that I attended at JavaOne 2008. They outline two totally different approaches from two very different companies. You're going to see that a well-performing SOA implementation requires considerable work and performance-tuning expertise.

35 Million Transactions Per Day

In the session entitled “SOA and 35 Million Transactions per Day: Mission Impossible?”, Matthias Schorer from Fiducia IT AG talked about their SOA architecture and how they achieve a throughput of 35 million transactions per day, a typical day being around 10 hours. If my notes don’t fail me, Fiducia is responsible for 19% of the banking operations in Germany. They manage 101,000 PCs, 54 million bank accounts, and billions of transactions every year for 780 cooperative banks and 31 other banks, over a 510 TB database… To say the least, it is a colossal system.

Their core banking system contains around 2,500 services and 8,400 Java classes and interfaces distributed over 280 UNIX servers. For them, SOA means the “Same Old Architecture” because they started to design services before SOA became an industry buzzword. Almost all of their services are not Web Services (or “Big Web Services”, as the RESTful camp would call them); they chose not to introduce extra layers just to consume the services internally. They use SOAP over HTTP when they need to open a service to the outside world.

It is good and relieving to see companies such as Fiducia IT AG talking about their pragmatic SOA architecture. It is important to demonstrate that the major idea behind SOA, the thing that one should get right, is Service-Orientation. I remember, two years ago, on an online Java forum, talking about creating a service-oriented architecture without Web Services, and you can probably imagine the big wave of reactions I got back. One doesn’t have to pay big bucks to an ESB vendor to have a solid SOA. Half of the job is to get the Service-Orientation right.

Fiducia IT AG have various front-end channels: bank branches use a Swing-based rich GUI, while home-banking users access the system through their browsers. They also have self-service and call-center channels. So, they have various clients for their services, and they have both channel-specific and channel-neutral services. One of the good design decisions they made was to create component clusters (in SCA language, those would correspond to Composites) and to guarantee transactional integrity within a cluster.

Fiducia run their SOA on a grid platform that they’ve built themselves. Furthermore, their system behaves like an ESB thanks to its event-driven nature (based on messages and queues). Their environment is not only distributed across a LAN, it is also geographically dispersed for security and safety reasons, given the data that they manipulate. As they have numerous customers, they end up running different versions of their system, which adds more challenges. Their portal runs on more than 1,000 servers, and they partition their data by customer ID. They implemented their messaging using Oracle Advanced Queuing.

Some lessons learned over the years:
- They don’t keep data in the queues. Queues only contain pointers to the data stored in the database (see the sketch after this list).
- They made sure that the pointers can be recreated if a queue fails.
- They monitor the grid. This visibility is very important, therefore services have to be traceable.
- They used an event-driven approach.
- They gave priority to online transactions over batch transactions.
- They distributed batch transaction processing by splitting the data file into chunks and sending the chunks to various queues.
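
The “pointers in the queue, data in the database” rule is essentially a claim-check approach. Here is a minimal sketch of the idea in Java, using JDBC for the payload store and the generic JMS API for the queue (Oracle Advanced Queuing can be driven through JMS, but the table name, columns and wiring below are hypothetical, not Fiducia's actual implementation):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.UUID;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.sql.DataSource;

/**
 * Claim-check style enqueue: the payload is stored in the database and only
 * a pointer (the payload ID) travels through the queue. If the queue is lost,
 * the pointers can be recreated by scanning the payload table.
 */
public class ClaimCheckProducer {

    private final DataSource dataSource;   // hypothetical payload store
    private final Session jmsSession;      // e.g. a JMS session backed by Oracle AQ
    private final MessageProducer producer;

    public ClaimCheckProducer(DataSource dataSource, Session jmsSession,
                              MessageProducer producer) {
        this.dataSource = dataSource;
        this.jmsSession = jmsSession;
        this.producer = producer;
    }

    public String enqueue(byte[] payload) throws Exception {
        String payloadId = UUID.randomUUID().toString();

        // 1. Persist the payload; the table and columns are illustrative only.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO payload (id, body, status) VALUES (?, ?, 'PENDING')")) {
            ps.setString(1, payloadId);
            ps.setBytes(2, payload);
            ps.executeUpdate();
        }

        // 2. Enqueue only the pointer, never the data itself.
        TextMessage pointer = jmsSession.createTextMessage(payloadId);
        producer.send(pointer);
        return payloadId;
    }
}
```

The consumer would do the reverse: dequeue the ID, load the payload from the table, process it, and mark the row as done, which is what makes the pointers recreatable after a queue failure.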

The SEPA (Single Euro Payments Area) requirements posed a real challenge to their system: according to SEPA, a payment file has to be processed the same day. The data exchange is based on UTF-8-encoded XML files, and somebody thought that it was all right to set a maximum limit of 10 million payments per file. Schorer said that the biggest file they had encountered was 70GB in size. Yuk!

They also discovered that JAXB performed much better than XMLBeans. And depending on the size of the XML transactions to be processed, they used either StAX or SAX: up to 100,000 records, they used StAX; beyond that, they used SAX.
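
As a rough illustration of what the StAX path looks like, here is a minimal streaming reader using the standard javax.xml.stream API; the Payment element name and the record counting are made up for the example and are not Fiducia's actual SEPA schema or code:

```java
import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

/**
 * Streams through a large payment file without loading it into memory.
 * Element names are illustrative; a real SEPA file follows the ISO 20022 schema.
 */
public class PaymentFileReader {

    public static long countPayments(String fileName) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        long count = 0;
        try (FileInputStream in = new FileInputStream(fileName)) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "Payment".equals(reader.getLocalName())) {
                    count++;   // process one payment record here
                }
            }
            reader.close();
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countPayments(args[0]) + " payments found");
    }
}
```

Because the file is pulled through the parser event by event, memory use stays flat even for a 70GB input, which is the whole point of choosing StAX or SAX over a tree-building API for files this size.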

ESB-based SOA for Intel’s IT Department

Another session about converting to SOA was given by David Johnston and CJ Newburn from Intel. They talked about how Intel’s IT department converted their operations to an ESB-based SOA. In this system, they had around 250 services interacting with each other.

They had to go through various iterations, and it is no surprise that, to quote Johnston, “the first iteration killed the performance”. During the following iterations, the development team tuned the environment until the performance became acceptable.

It is really important to listen to rare case studies such as these, because you don’t get these real-life experiences when talking to a vendor or reading an SOA book. These are, however, the things that we see in the field. Despite that, it is not always easy to convince everyone that SOA is not a silver bullet and that it comes with some compromises. To reiterate the comparison between an old system call and a new system call:

- Legacy direct call: 28 ms
- First-iteration ESB call: 332 ms
- Optimized/tuned ESB call: 38.5 ms
- Optimized/tuned legacy call: 18 ms
After fine-tuning their SOA implementation deployed on an ESB, Intel decided that a 1.4x performance degradation (38.5 ms for the tuned ESB call versus 28 ms for the original legacy direct call) was tolerable.

As you can see, even though using an ESB has its benefits, it also comes with drawbacks. The same is true for the various implementations of a Service-Oriented Architecture: depending on the implementation technologies used, it is natural to see some overhead compared to direct in-process calls. Intel was content with a performance degradation of 1.4x; however, not all businesses and operations can afford that.

Johnston and Newburn said that the smart test bed they had built was one of the key points of their successful SOA implementation. They also had a dedicated lab in which to run the new system, where they ran their simulations, configured the required latencies, and so on.

An important point for a successful SOA implementation is system visibility and service traceability. Intel used them to fine-tune their system by answering various questions: What is happening in each step? How long does each step take? And so on. This is where an ESB may help, as it may provide enterprise-wide logging and monitoring, among other things. It is also important to look at the “Latency Pipeline Breakdown” to identify the bottlenecks and go for the biggest bang for the buck when fine-tuning an SOA.
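
Intel’s actual tooling wasn’t shown, but the kind of per-step visibility described here can be sketched with a trivial timing wrapper; the step names and the fake service calls below are placeholders standing in for marshalling, ESB routing and the backend invocation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

/**
 * Records how long each named step of a service call takes, so that a
 * latency pipeline breakdown can be logged or fed to a monitoring tool.
 */
public class LatencyBreakdown {

    private final Map<String, Long> stepMillis = new LinkedHashMap<>();

    public <T> T time(String step, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            stepMillis.put(step, (System.nanoTime() - start) / 1_000_000);
        }
    }

    public void report() {
        stepMillis.forEach((step, ms) ->
            System.out.println(step + ": " + ms + " ms"));
    }

    public static void main(String[] args) {
        LatencyBreakdown breakdown = new LatencyBreakdown();
        // Placeholder steps; in a real pipeline these would wrap the actual
        // marshalling, bus routing and backend call.
        String request = breakdown.time("marshal", () -> "<request/>");
        String response = breakdown.time("invoke", () -> request.toUpperCase());
        breakdown.time("unmarshal", () -> response.length());
        breakdown.report();
    }
}
```

Even something this simple makes it obvious which segment of the pipeline to attack first, which is what the “biggest bang for the buck” tuning approach relies on.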

They considered various points when analyzing the performance of their system:
- Concurrency & Throughput
- Concurrency & Latency
- Message Size
- Scenarios to discover SOA overhead
- XML manipulations and transformations
- Heap Size
- Garbage Collection (GC)

For example, they discovered that message size does matter: they saw a sharp decrease in performance with messages bigger than 64KB. The heap size matters mostly for big message manipulations, and GC doesn’t really matter when working with small messages. They used their own (Intel’s) XML acceleration product to improve XML manipulation performance and saw a 1.4x performance increase overall.
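
A finding like the 64KB threshold typically comes out of a simple message-size sweep. Here is a toy sketch of such a sweep; the “transform” step is just an array copy standing in for whatever XML marshalling or transformation the bus actually applies, so the numbers it prints are only illustrative:

```java
import java.util.Arrays;

/**
 * Toy sweep over message sizes: for each size, apply a stand-in transform a
 * number of times and report the average cost per message. In a real test
 * the transform would be the actual XML processing step used on the bus.
 */
public class MessageSizeSweep {

    public static void main(String[] args) {
        int[] sizes = {1_024, 16_384, 65_536, 262_144, 1_048_576};
        int iterations = 1_000;
        for (int size : sizes) {
            byte[] message = new byte[size];
            Arrays.fill(message, (byte) 'x');

            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                byte[] copy = Arrays.copyOf(message, message.length); // stand-in transform
                if (copy[0] != 'x') throw new IllegalStateException();
            }
            long avgMicros = (System.nanoTime() - start) / iterations / 1_000;
            System.out.println(size / 1024 + " KB: " + avgMicros + " us per message");
        }
    }
}
```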

They had two very important points in their summary:
- An SOA implementation can take months of tuning
- Transition to SOA is prohibitively expensive without tuning

More Stories By Yagiz Erkan

Yagiz Erkan has been Lead Technical Architect at DeCare Systems Ireland since 1998 and has over 10 years of industry experience. He is also the development champion within the company and is responsible for guiding and directing the technical evolution of the software development processes at DSI. He manages an architectural group that ensures the most up-to-date and suitable practices are utilized, leading to the delivery of high-quality applications and solutions.
