Yes, you read that right, and no, nobody has spiked your morning coffee with psychotropic drugs.

10M tx/sec, 100ms max latency, 0.1% fee.

Bold claims indeed for a blockchain project.

We have seen similar claims before and, as always, the devil is in the details.

Currently, tx/sec is the focus for many new projects looking to drive adoption and scalability, and rightly so. The mainstream has always needed a technology to be fast, inexpensive and easy to use before mass adoption really takes off, so this needs to be addressed urgently.

Bitcoin, despite all its wonderful attributes, scales pitifully in these respects, and its fee structure has been horrible in recent years. Ethereum and others haven’t fared much better, despite claims to the contrary. Both projects are working hard to fix this, and the Lightning Network, together with Vitalik Buterin’s forthcoming scaling updates for Ethereum, will go a long way towards addressing these issues.

However, tx/sec is only one facet of the solution. Any proposed solutions must be scalable, secure, and possess low latency and versatility. This is a VERY big ask — especially in a new industry such as ours and the team who can achieve this has to be something a little bit special.

Of all the offerings out there, Harmony looks to be at the head of the queue in terms of quality and credibility.

The Proposal/Concept — Pt.1: Transport Protocol

Before we really dive in, we should establish exactly what Harmony is in relation to the rest of the blockchain space. Like Ethereum, Harmony is an environment which enforces rules for the benefit of all, without the inherent weaknesses and biases of centralised systems.

Ethereum as an environment is essentially an infrastructure platform for enforceable contracts and applications where many things can be built using its tools etc.

An environment is only as effective as its data transport infrastructure, and block-processing bottlenecks add latency, delays and fees on top. Vitalik and his team are addressing the latter, and things should improve significantly in the near future, but the transport infrastructure is still that of the internet itself: a mish-mash of decades-old protocols layered on the OSI stack, which was never designed at inception for interoperability.

Historically, the major bottlenecks for the internet have always been at layers 1–4 of the OSI stack. Layers 1 and 2 are the physical and data-link layers: broadband, cable, switches and so on. These have improved enormously over the last 20 years, as our home and mobile internet capabilities show.

But layers 3 and 4, mainly TCP and UDP, are still effectively acting as bottlenecks. TCP is the slower of the two: it is a connection-oriented protocol in which information is sent sequentially after a connection is established, and all packets are verified at each end. QUIC UDP is faster and leaner because it is not connection- or verification-based; it broadcasts smaller packets/chunks of the data in question, and verification and reassembly are handled by applications sitting higher up the OSI stack (mainly layers 5, 6 and 7).
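To make the distinction concrete, here is a minimal, self-contained sketch (illustrative only, not Harmony code) of the difference: TCP requires a handshake and delivers a verified, ordered byte stream, while UDP fires standalone datagrams and leaves ordering and verification to whatever sits higher up the stack, which is exactly the property QUIC builds on.

```python
import socket
import threading

# --- TCP: bind and listen first, so the handshake has somewhere to land ---
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))          # port 0 = pick any free port
tcp_srv.listen(1)
tcp_port = tcp_srv.getsockname()[1]

def tcp_echo():
    conn, _ = tcp_srv.accept()          # completes the 3-way handshake
    conn.sendall(conn.recv(1024))       # echo the in-order, verified stream
    conn.close()

threading.Thread(target=tcp_echo, daemon=True).start()
client = socket.create_connection(("127.0.0.1", tcp_port))
client.sendall(b"hello-tcp")
tcp_reply = client.recv(1024)
client.close()
tcp_srv.close()

# --- UDP: no connection at all; each datagram stands alone ---
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
udp_port = udp_srv.getsockname()[1]

def udp_echo():
    data, addr = udp_srv.recvfrom(1024) # datagrams arrive as-is, unverified
    udp_srv.sendto(data, addr)

threading.Thread(target=udp_echo, daemon=True).start()
udp_client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_client.sendto(b"hello-udp", ("127.0.0.1", udp_port))
udp_reply = udp_client.recvfrom(1024)[0]
udp_client.close()
udp_srv.close()

print(tcp_reply, udp_reply)             # b'hello-tcp' b'hello-udp'
```

On a real network the UDP datagrams could be lost or reordered with no protocol-level recovery; QUIC's trick is to do that recovery work in userspace, above the fast connectionless transport, rather than in the kernel's TCP stack.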

As the Harmony WP states, and they are quite right to put it this way, any protocol bottlenecks (network or blockchain) are usually found in implementation rather than in the protocols themselves:

It’s not WHAT you build but HOW you build it:

Google’s QUIC UDP is an excellent choice for transmission and throughput, and leaving the uppermost layers to tidy up the sequencing and error issues that come with UDP (as opposed to TCP) is an elegant and efficient way of resolving these problems without sacrificing speed.

The technical quality of the team and their experience as part of the Google infrastructure engineering team really shines through in their choice of technology for the transport protocol part of the solution. The team’s background (detailed later) gives high confidence in their ability to implement this part of Harmony.

More detailed resources and information on QUIC and UDP can be found here:

https://www.diffen.com/difference/TCP_vs_UDP

https://www.zdnet.com/article/google-speeds-up-data-transfers-with-quick-udp-internet-connections/

The QUIC Transport Protocol: Design and Internet-Scale Deployment

Taking a Long Look at QUIC: An Approach for Rigorous Evaluation

It’s also future-proof for the medium term in that the Harmony transport protocol is designed with Huawei’s 5G vision in mind:

5G Vision: 100 Billion connections, 1 ms Latency, and 10 Gbps Throughput

The combination and integration of these fully-developed technologies make for a powerful proposition for the transport protocol part of the Harmony platform.

The Proposal/Concept — Pt.2: Consensus Protocol

What to choose? There are so many out there now, and they seem to proliferate almost daily as more projects enter the industry. What would really help is a study measuring all the consensus protocols against each other as mechanisms for generating fast and secure consensus.

The problems are many here:

  1. A vast number to choose from
  2. Few or no clear comparisons between them
  3. Very few are peer-reviewed or academically researched, since most are industry/project-generated
  4. Lack of clarity/agreement on what determines quality in a consensus protocol
  5. Few like-for-like comparisons: most stand in isolation and pertain only to the projects they were designed for

Before we go into more detail, it would help to define the key metrics for a quality consensus protocol, and then study how the choice made for Harmony measures up against those metrics and against the other parts of the project.

There is one study/survey undertaken by University College London and the Alan Turing Institute which can help us here. It was done in 2017, but the metrics used are still very relevant and still apply:

Consensus in the Age of Blockchains

I can’t recommend this document highly enough as a general overview for what determines consensus protocol quality.

The key evaluation criteria areas are:

  1. Security
  2. Performance
  3. Design

“Our evaluation framework describes systems along three broad themes: security, performance, and design aspects. In terms of security, we consider three properties: consistency (i.e., whether or not the system will reach consensus on a proposed value), transaction censorship resistance (i.e., the system’s resilience to malicious nodes suppressing transactions), and DoS resistance (i.e., the system’s resilience to DoS attacks against nodes involved in consensus). In terms of performance, we consider throughput (i.e., the maximum rate at which values can be agreed upon by the consensus protocol), scalability (i.e., the system’s ability to achieve greater throughput when consensus involves a larger number of nodes) and latency (i.e., the time it takes from when a value is proposed, until when consensus has been reached on it).”

Consensus protocols themselves break down into three broad areas:

  1. Proof of work
  2. Proof of X
  3. Hybrid consensus protocols

Any consensus protocol should exhibit ‘liveness’ and ‘safety’:

“For liveness, validity ensures that if a node broadcasts a message, eventually this message will be ordered within the consensus, and agreement ensures that if a message is delivered to one honest node, it will eventually be delivered to all honest nodes. For safety, integrity guarantees that only broadcast messages are delivered, and they are delivered only once, and total order ensures that all honest nodes extract the same order for all delivered messages.”
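The quoted properties can be made concrete with a small, hypothetical checker (my own sketch, not anything from the Harmony WP or the survey): given the set of messages actually broadcast and each honest node's ordered delivery log, it verifies integrity, agreement and total order as defined above.

```python
def check_consensus(broadcast, logs):
    """broadcast: set of messages actually broadcast.
    logs: one ordered delivery log per honest node."""
    for log in logs:
        # Integrity: only broadcast messages are delivered, and only once each.
        assert set(log) <= broadcast, "delivered a message never broadcast"
        assert len(log) == len(set(log)), "duplicate delivery"
    # Agreement: a message delivered to one honest node reaches all of them.
    delivered = set().union(*map(set, logs))
    assert all(delivered <= set(log) for log in logs), "agreement violated"
    # Total order: every honest node extracts the same delivery order.
    assert all(log == logs[0] for log in logs), "total order violated"
    return True

# Two honest nodes that agree on the same messages in the same order pass:
print(check_consensus({"a", "b", "c"}, [["a", "c"], ["a", "c"]]))  # True
```

A real protocol of course has to *guarantee* these properties under adversarial conditions rather than merely check them after the fact, but the checker shows exactly what is being promised.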

When measured against all the criteria for a quality consensus protocol in that document, and considering the directives that drive Harmony (speed, safety, liveness, throughput and scalability), the team opted for OmniLedger, stating clearly that it is the best match among the options currently available and provides the best foundation for Harmony’s needs:

Most Cited Byzantine Consensus Publications

OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding

Algorand: Scaling byzantine agreements

Bitcoin-NG: A Scalable Blockchain Protocol

SPECTRE: Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections

As can be seen, extensive comparative research was undertaken before OmniLedger was selected as the optimal consensus foundation on which to build Harmony:

“After extensive research, we conclude that OmniLedger is the most scalable permissionless protocol. Most importantly, its publication OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding is peer reviewed in the top research conference IEEE Symposium on Security and Privacy. OmniLedger is tested with 1,800 hosts (25 committees, each consisting of 72 nodes) and 13,000 tx/sec.”
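A quick back-of-the-envelope check of the quoted figures (my arithmetic, not the whitepaper's) confirms they are internally consistent, and hints at how far sharded throughput would have to scale to meet the 10M tx/sec headline:

```python
# Figures quoted from the OmniLedger evaluation above.
committees = 25
nodes_per_committee = 72
total_hosts = committees * nodes_per_committee
print(total_hosts)              # 1800, matching the "1,800 hosts" quoted

measured_tps = 13_000
per_shard_tps = measured_tps / committees
print(per_shard_tps)            # 520.0 tx/sec per committee

# If throughput kept scaling linearly with shard count (it won't exactly,
# because cross-shard transactions add coordination overhead), the 10M tx/sec
# headline would need on the order of this many shards:
shards_needed = 10_000_000 / per_shard_tps
print(round(shards_needed))     # ~19231
```

The gap between 13,000 tx/sec measured and 10M tx/sec targeted is exactly why the transport and system-tooling choices in parts 1 and 3 matter so much: the consensus layer alone does not get you there.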

For the solution to work, bottlenecks have to be minimized in all areas, and the different parts of the project have to match in terms of scalability and throughput, whilst maintaining security and efficiency in consensus.

The choices made for parts 1 and 2 are clearly very good in terms of what Harmony is looking to achieve.

But what of the system tools needed to develop the platform fully and use all those synergies in parts one and two effectively?

The Proposal/Concept — Pt.3: System Tooling

Here I have to confess something — despite many years in network engineering prior to getting into crypto, system tooling is not a core part of my skillset and is also pretty hard to break down for non-technical people, so please bear with me.

Part 1 deals with network transport via QUIC, part 2 deals with block processing etc. and consensus.

Both of these are superbly addressed, but they will not perform properly unless processing bottlenecks within the system are addressed at the same time; the choices made in system tooling tackle exactly this.

Many existing approaches are not ideal for Harmony, so the team took a different approach:

“One principle guiding our work is to actively seek novel architecture and to use optimal languages.”

This approach runs through all three parts of the solution: tried and tested technology, integrated to create something greater than the sum of its individual parts.

The whitepaper puts it this way:

Measuring maximum sustained transaction throughput

Terabyte blocks for Bitcoin Cash

A roadmap for scaling Bitcoin Cash

Mosaic: Processing a trillion-edge graph on a single machine

Destination-Passing Style for Efficient Memory Management

Unikernels: The Rise of the Virtual Library Operating System

Reliable Messaging to Millions of Users with Migratory Data

All of these choices target the bottlenecks to be expected when building a protocol at this scale, and which must be dealt with completely, namely:

  1. Eliminating parallel-processing bottlenecks
  2. Quality implementation
  3. Optimal backend architecture
  4. Systems optimization options: memory maps, parallelization, indexing unspent outputs
  5. Minimizing context switches to the kernel
  6. Saturating the network capacity of the system (from parts 1 and 2 of the solution) with lock-free multi-core algorithms and allocator-free regional memory management (Rust)
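As a toy illustration of point 1, here is what batched, concurrent transaction validation looks like in Python (my sketch only; Harmony's actual stack targets lock-free multi-core algorithms in languages like Rust and Go, and the `validate` function below is a hash-based stand-in for real signature verification):

```python
from concurrent.futures import ThreadPoolExecutor
from hashlib import sha256

def validate(tx: bytes) -> bool:
    # Stand-in "validation": a hash computation instead of a real
    # signature check, just to give each transaction some work to do.
    return sha256(tx).hexdigest() != ""

def validate_batch(txs):
    # Fan the batch out across worker threads and collect results in order,
    # instead of validating one transaction at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(validate, txs))

txs = [f"tx-{i}".encode() for i in range(1_000)]
print(all(validate_batch(txs)))  # True
```

The design point is that validation of independent transactions is embarrassingly parallel, so the limiting factor becomes memory management and context switching (points 4 and 5), which is exactly where the Rust/unikernel-style choices in the reference list come in.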

They seem to have ticked pretty much all the boxes here, and the depth of knowledge and thoroughness of the research behind these choices is very impressive.

There is also a mining element — which is covered by a simple fee and rewards system:

For more info on the mining metrics and fees, please see the FAQ in the WP.

Rethinking Bitcoin’s Fee Market

Security and Fairness in terms of Consensus:

These lessons are being relearnt by Bitcoin, Ethereum (the DAO) and others. Common examples include the Parity multi-sig wallet attacks, 51% attacks and mining-algorithm hijacking attacks (XVG suffered the last two); it’s a long list, and it is getting longer.

Harmony addresses this primarily by opting for the very intelligent approach of language-based security:

The Min language is complete, by the way, and hosted in its own GitHub repository with a full list of commits.

Language-based security

The Science of Deep Specification

Fairness is dealt with by utilizing distributed commits with auditing of contracts, together with the algorithms driving the OmniLedger architecture detailed earlier (see the WP and above).

Use-cases:

So, with this beautifully-crafted protocol what can we actually use it for?

This is the key question for an investor and other interested parties (‘show me the money!’).

Well, what could you do with something that has this kind of speed, scalability and versatility?

Imagination is the only real restriction here.

Here are a few possibilities posited in the WP:

In fact, if you integrated AR into the two examples above, you could also use it to examine the future visual, community and environmental impact of new construction and infrastructure before work commences: you could SEE the future construction/infrastructure by mapping AR images onto the actual location as you stood in front of it and walked around it.

This would be invaluable for architects, civil engineers, surveyors, public and environmental authorities and others worldwide.

The drone-swarm example indicated above also has huge potential and synergies for smart cities, agriculture and disaster relief, for example.

The use-cases and possibilities here for Harmony will only increase over time — the growth potential is very high.

Another huge use-case is:

IoT marketplaces badly need something like Harmony. With millions of IoT devices being deployed daily (and this number will increase significantly going forward), more and more of them will need to interact with each other in a fast, secure and massively scalable manner, involving billions of low-latency microtransactions at speed.

Cars and houses trading energy, devices trading location, purchasing and usage data for the benefit of their owners, device software updates, status updates for maintenance and security: these and many other scenarios are only now appearing as this massive future market develops at breakneck speed.

Harmony, with this kind of scaling and speed, looks like the only real contender able to transport and deliver this traffic at the volumes, speeds and latencies the market and the technology will require.

The others so far, despite the bold claims and quality code, are just not scalable/big or fast enough.

No doubt many other use-cases will be found that will be an excellent fit here, since the potential and scale of Harmony are more than large enough to allow for plenty of creativity and flexibility.

Team

In a word: ‘STACKED’ from a technical perspective, with five PhD holders among its senior infrastructure and engineering executives, formerly with Google, Apple, Microsoft and others, backed up by extensive top-tier entrepreneurial and technical pedigrees:

Roadmap

Proceeding to launch on schedule as planned, the Harmony team is now on the second of the bullet areas indicated here:

Token metrics and Sales information:

There is no released info on the caps or any ETH peg as yet, but educated guesswork tells me it will be within the sweet spot of $40M tops with a good/fair peg; I would expect nothing less from a quality team like this (smart and sensible people). Information to fill in these areas should be forthcoming very soon.

Here is what we have so far:

Lockups are good for the team, guaranteeing long-term commitment, and the 40% up front plus 10% monthly is a nice way to build commitment as momentum builds towards a mainstream exchange listing at the end of 2018.

In short, so far the information in this area is nicely modelled and balanced in terms of flexibility, gains, growth potential and commitment.

Conclusion

Transactions on blockchains also need to scale, including contract resolution, payments, transfers — the usual things many projects are looking to address.

What is missing is a protocol architecture and infrastructure to deliver all these things with the needed speed, scale, security, fairness and openness — and which is capable of growing as the size of the network grows — until now.

To give you an idea of the rapidly growing needs in this space, SWIFT handles just over 30M transactions per day, Visa’s quoted capacity is around 24,000 tx/sec, and there are many other networks of similar scale globally.
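Putting those figures on a common tx/sec scale makes the gap vivid (simple arithmetic of mine, not from the article's sources):

```python
# SWIFT: roughly 30M transactions per day, averaged over 24 hours.
swift_per_day = 30_000_000
swift_tps = swift_per_day / 86_400          # 86,400 seconds in a day
print(round(swift_tps))                     # ~347 tx/sec on average

# Visa's often-quoted capacity figure versus Harmony's headline target.
visa_tps = 24_000
harmony_target_tps = 10_000_000
print(round(harmony_target_tps / visa_tps)) # ~417x Visa's quoted capacity
```

In other words, the legacy rails blockchain hopes to replace are orders of magnitude below the headline target; even matching Visa in sustained practice would already be a milestone.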

A lot of these legacy technologies will be replaced by blockchain and so blockchain has to scale significantly in this area in order to deliver on its potential and gain mass adoption.

The protocols and the environment have to scale, and ambitious projects like Harmony and NKN are trying to address this need and fill this gap; more will surely follow them into this space.

I have worked in network engineering and infrastructure since 1995, and a colleague of mine has similar experience; we both agree that in terms of protocol engineering for transport, delivery and processing, and the combination of technology choices, this thing is, technically speaking, a Da Vinci.

Technically, it’s a thing of beauty. The depth, quality and sheer magnitude of the work and the vision involved are truly breathtaking, and this project will go very, VERY far.

The Harmony Mothership is approaching, folks. She is coming VERY soon, and we all expect great things when she gets here.

GreedyFerengi

Crypto professional, writer, investor, advisor. A channel to contain my articles on projects and other topics of interest.