Issue #64: Talking with Security Experts
One of my responsibilities (and joys) at Forta is hosting our now regular Smart Contract Security Roundtable Series on Twitter Spaces. As someone who doesn’t have a security background, I can tell you the quickest way to learn/understand a new area is by talking to experts and I’m fortunate it’s part of my job description.
The roundtables have covered a variety of topics, from NFTs and cross-chain bridges to Web 2 vs. Web 3 security, and we’ve been lucky to have guests like @nassyweazy (CISO @ a16z), @mudit_gupta (CISO @ Polygon) and @3LAU (DJ and Co-founder of Royal). If we get @samczsun, I’m retiring…
While the insights from the first three months of roundtables are fresh in my mind, I figured I would cherry pick some of the most interesting and share them with you.
Let’s zoom in…
Web 2 vs. Web 3 Security
Web 3’ers (myself included) have a tendency to draw a line in the sand. It’s either a Web 2 application or a Web 3 application. There is no middle ground.
The reality is we don’t simply turn Web 2 off and turn Web 3 on. It’s a transition. A transition that will take many years. For the time being, every Web 3 application has some “Web 2”.
Prime example…the front end. Smart contracts are great, but the average user isn’t going to the command line to interact with your application. You need a front end; a user interface that abstracts away a lot of blockchainey stuff so a non-engineer can use your application. Unlike your application’s smart contracts, which are hosted on the blockchain, your front end UI is maintained and hosted by someone, somewhere (Web 2). Many dApps also rely on centralized services for data (Alchemy, Infura) and performance (Cloudflare).
Web 3’s reliance on Web 2 components means the surface area of security risk extends well beyond the smart contracts. One of the larger hacks in recent memory - BadgerDAO’s $120M loss - involved a compromised Cloudflare API key that was used to inject malicious code into Badger’s front end. Badger’s smart contracts were fine. The Web 2 components were the problem.
Teams prioritize smart contract security for good reason and often set aside six figures for contract audits, but they need to be mindful of best practices around things like UI/application security, social engineering, and as we’ll see later, private key management and access controls.
Cross-Chain Bridge Security
Bridges have taken it on the chin recently. There have been roughly $2B in hacks targeting cross-chain bridges, including last month’s Ronin bridge hack for over $600M.
We spoke with @mudit_gupta, Chief Information Security Officer at Polygon, a few weeks ago and Mudit did a great job of describing the three types of cross-chain bridges that exist today and the risk profile of each.
Type 1 - Centralized Bridge
Centralized bridges are essentially hot wallets straddling the fence between multiple chains. They hold a user’s assets on one chain and issue them a corresponding amount of tokens on another chain. Liquidity on both sides is managed by the centralized entity. Binance is probably the best example of a centralized bridge operator, straddling the fence between Ethereum and Binance Smart Chain.
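To make the lock-and-mint flow concrete, here is a minimal Python sketch of a centralized bridge operator’s bookkeeping. This is my own illustrative model, not any real bridge’s implementation: `CentralizedBridge`, `deposit`, and `withdraw` are hypothetical names, and a real operator tracks balances on-chain, not in a Python dict.

```python
# Toy model of a centralized bridge: the operator locks a user's tokens on the
# source chain and credits an equal amount of wrapped tokens on the destination
# chain. All names here are illustrative, not any real bridge's API.

class CentralizedBridge:
    def __init__(self):
        self.locked = {}   # user -> amount locked on the source chain
        self.wrapped = {}  # user -> wrapped tokens issued on the destination chain

    def deposit(self, user: str, amount: int) -> None:
        """User deposits `amount` on the source chain; operator mints wrapped tokens."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.locked[user] = self.locked.get(user, 0) + amount
        self.wrapped[user] = self.wrapped.get(user, 0) + amount

    def withdraw(self, user: str, amount: int) -> None:
        """User burns wrapped tokens; operator releases the locked originals."""
        if self.wrapped.get(user, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.wrapped[user] -= amount
        self.locked[user] -= amount


bridge = CentralizedBridge()
bridge.deposit("alice", 100)
bridge.withdraw("alice", 40)
print(bridge.locked["alice"], bridge.wrapped["alice"])  # 60 60
```

The key observation: the two ledgers only stay in sync because the operator says so. Whoever controls the operator’s keys controls every locked asset, which is why key management is the whole ballgame for this bridge type.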
The security risks of a centralized bridge are the same security risks that exist for exchanges and custodians. Their primary responsibility is securing private keys (key management), and as a result centralized bridges have proven to be pretty secure.
Type 2 - Proof of Stake Bridge
Proof of Stake bridges are like little blockchain networks narrowly focused on facilitating cross-chain activity. Whereas centralized bridges are managed by a single entity, proof of stake bridges are managed by a group. They often involve multisigs or some form of escrow mechanism controlled by a group of signers/validators that watch and vote on the ability to unlock corresponding assets on another chain.
Because POS bridges involve both smart contracts and a group of centralized gatekeepers, they inherit all the code risk of Web 3 and the traditional security risks of Web 2 (key management, access controls). POS bridges have the most attack vectors, and have unfortunately been the victims of most of the major exploits.
Ronin - $600M
Wormhole - $300M
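The threshold-approval mechanism at the heart of a PoS bridge can be sketched in a few lines. This is a toy model under my own assumptions (a 5-of-9 validator set, string IDs standing in for cryptographic signatures); real bridges verify actual signatures on-chain.

```python
# Toy model of a proof-of-stake bridge's withdrawal approval: a withdrawal
# unlocks only when a threshold of recognized validators has signed off.
# Validator IDs here stand in for real cryptographic signatures.

THRESHOLD = 5                                      # e.g. 5-of-9 must approve
VALIDATORS = {f"validator-{i}" for i in range(9)}  # the recognized signer set

def withdrawal_approved(signatures: set) -> bool:
    """Return True if enough distinct, recognized validators signed."""
    valid = signatures & VALIDATORS  # discard unknown signers
    return len(valid) >= THRESHOLD

print(withdrawal_approved({f"validator-{i}" for i in range(4)}))  # False
print(withdrawal_approved({f"validator-{i}" for i in range(5)}))  # True
```

This is also why key compromise is fatal here: an attacker who obtains threshold-many validator keys can make any withdrawal look legitimate, with no smart contract bug required.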
Type 3 - Decentralized Bridge
Decentralized bridges take a proof of deposit from one chain and validate it on the other chain (ex: Polygon Plasma Bridge). Decentralized bridges are all code. They don’t rely on centralized signers/validators, so while there’s more code risk, they don’t have to worry about the traditional security risks that POS bridges deal with.
Decentralized bridges are newer and facilitate less activity than the other bridge types, but so far there are no known exploits.
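A “proof of deposit” is typically a Merkle inclusion proof: the destination chain holds a committed root of source-chain deposits, and anyone can prove a specific deposit belongs to it. The sketch below is a simplified illustration of that idea, not Polygon’s actual Plasma bridge implementation, which checks proofs against checkpointed state.

```python
# Simplified Merkle inclusion check: recompute the root from a leaf and its
# sibling path, and accept only if it matches the committed root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, siblings: list, leaf_is_left: list, root: bytes) -> bool:
    """Walk up the tree, hashing with each sibling in the stated order."""
    node = leaf
    for sib, is_left in zip(siblings, leaf_is_left):
        node = h(node + sib) if is_left else h(sib + node)
    return node == root

# Build a tiny 4-leaf tree by hand to demonstrate.
leaves = [h(f"deposit-{i}".encode()) for i in range(4)]
n01 = h(leaves[0] + leaves[1])
n23 = h(leaves[2] + leaves[3])
root = h(n01 + n23)

# Prove deposit 2 is included: its siblings are leaves[3], then n01.
print(verify_inclusion(leaves[2], [leaves[3], n01], [True, False], root))  # True
```

Because verification is pure computation, there is no signer set to compromise; the trade-off is that all of the trust shifts onto the correctness of the code itself.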
Changing Over Time
Whether, and how much, to invest in security has always been a business decision. How much security do I get for spending $X, and what is the risk of not doing something?
The important thing to keep in mind is that a project’s risk profile changes over time, and often rapidly. The cost benefit analysis you did yesterday may be different today.
A new DeFi protocol that launches on Day 1 has $0 TVL and zero users. It has the lowest possible risk profile, and the founders may reasonably determine it isn’t worth investing more money in security beyond the standard code audits and basic monitoring/alerting. However, a month from now, the protocol may have a $100M TVL and 5,000 users. Its risk profile is significantly higher, and the cost-benefit analysis of investing more in security is a lot different.
Your investment and sophistication around security should evolve with your project’s risk profile, and you should be thinking about what that evolution looks like before it happens. What will your security look like when your TVL hits $10M? $100M? $1B?
A lot of projects don’t think about this until it’s too late.
Have a plan.
Another unfortunate reality of your DeFi project attracting more users and liquidity is it attracts hackers of increasing sophistication. It’s risk vs. reward, and the bigger the reward, the more time and money hackers are willing to spend.
There is no hard and fast rule here, but the general framework looks like this…
TVL < $1M. Below a certain TVL, you don’t expect any hacker to pay attention. It’s not worth their time.
TVL > $10M. You’re on their radar now, but you aren’t big enough to warrant a dedicated effort. Hackers will use automated techniques and employ those techniques across dozens of protocols at the same time. Their investment (time) is fixed, but their potential return is not.
TVL > $100M. Now you’ve got their full attention. Hackers are incentivized to spend a lot more time. You can expect them to do thorough research on your protocol and think about novel attack vectors. Spending weeks or months searching for bugs, or executing social engineering attacks, is worth it because the potential bag is much bigger.
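The tiers above can be encoded as a quick rule of thumb. This is my own illustrative mapping of the framework, not a formal rule; the exact dollar boundaries are rough, and the band between $1M and $10M isn’t spelled out in the framework at all.

```python
# Rough encoding of the TVL-based attacker-attention tiers described above.
# Thresholds are illustrative, not precise.

def threat_tier(tvl_usd: float) -> str:
    """Map a protocol's TVL to the rough level of attacker attention it draws."""
    if tvl_usd >= 100_000_000:
        return "full attention: bespoke research and social engineering"
    if tvl_usd >= 10_000_000:
        return "on the radar: automated, fixed-cost attacks run across many protocols"
    return "little attention: not worth a dedicated effort"

print(threat_tier(500_000))
print(threat_tier(50_000_000))
print(threat_tier(250_000_000))
```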
Another reminder that your approach to security should evolve as your application grows.
In the last 9 months, I’ve spoken with 100+ Web 3 projects about their approach to security. I’m not naive enough to think security will ever be their top priority - startups will always prioritize product and user acquisition over everything else - but I am hopeful that a combination of better tools, better standards, and more Web 2 talent migrating to Web 3 will level the playing field between Web 3 teams and hackers. Right now, we’re outgunned.
Thanks for reading,
Not a subscriber? Sign up below to receive a new issue of 30,000 Feet every Sunday.