How big is your Electronic Tech’s Carbon Footprint?
Everyone knows that gaseous emissions, from fossil-fuel burning to bovine flatulence, are the cause of global warming, right? Indeed they are, but less obvious causes of a high carbon footprint are becoming apparent. And your ‘Tech’ may be involved.
Cryptocurrency as a ‘democratic’ monetary system was invented years ago when the Internet first revolutionised worldwide communication. It was seen as a way of wresting control of the money supply away from central banks controlled by politicians. The first, but not the only, cryptocurrency was called Bitcoin. Before money was invented, trade was conducted by barter. I have three sheep and want to swap them for a cow. All I need is someone nearby who has a cow to trade and wants three sheep in return. Life must have been frustrating in those days. So, the concept of a token with value arrived, with the advantage that it could be used to buy things. Don’t worry, I’ll get to what this has to do with carbon footprints soon.
Any monetary system based on tokens, initially coins, then notes and now data, requires three things to be true in order to work successfully:
- The supply of tokens must be strictly limited and only ‘minted’ by people authorised to do so. Traditionally in the UK that has always been the job of the Bank of England and the Royal Mint respectively. If everyone were allowed to make tokens, the currency would have no value.
- In order to ensure the above, the tokens must be impossible to counterfeit. In the early days, limiting the supply and preventing forgery was achieved by using a very rare material with which to make the coins – gold.
- Everyone using this currency has to believe the above statements are true.
These rules attempt to ensure that the only way you get your hands on money is in return for goods supplied or a service performed – a transaction. The massive increase in overall wealth thanks to the industrial revolution led to the introduction of paper notes and coins made from commoner metals. This happened because there wasn’t enough gold around to cope with the expanding money supply and in any case, buying high-value items with large amounts of gold was somewhat inconvenient. Nevertheless, at first every note was backed by its equivalent value in gold, the latter being kept as bullion in the vaults of the Bank of England. In the twentieth century this gold standard was dropped, allowing governments to ‘print money’ as a solution to short-term economic problems. Nowadays we call it ‘Quantitative Easing’. Unchecked, it always leads to hyperinflation and a collapse in the value of the currency.
Let’s get back to cryptocurrency. Those conditions outlined above are fundamental to the operation of any currency system, whether it be based on blocks of data in a computer or dollar notes. So how do we satisfy these requirements with no physical tokens and no central control of the money supply? The clue is in the name: cryptocurrency. Data encryption at a very, very high level is the key to creating ‘virtual’ money which satisfies the need for security and, as it happens, provides a solution to the problem of regulating the money supply without centralised control. It may be distributed around a network, but any transactional system needs a ledger or record of all transactions that have taken place. It must be, in essence, write-once, but subsequently easily readable by anyone. The ledger consists of linked data blocks, each containing about ten minutes’ worth of transactions encrypted to such a level that it can never be fraudulently altered. The system is called Blockchain and it’s what lies behind Bitcoin operation.
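The hash-linked ledger described above can be sketched in a few lines of Python. This is an illustrative toy, not Bitcoin’s real data format (actual blocks carry headers, timestamps and Merkle trees), but the core principle – each block committing to the hash of its predecessor – is the same:

```python
import hashlib
import json

def block_hash(block):
    # SHA-256 over a canonical JSON serialisation of the block
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block records the hash of the previous block, forming the chain
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    # Altering any earlier block changes its hash and breaks every later link,
    # which is what makes the ledger effectively write-once
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, ["Alice pays Bob 1 BTC"])
add_block(ledger, ["Bob pays Carol 0.5 BTC"])
assert is_valid(ledger)

ledger[0]["transactions"][0] = "Alice pays Bob 100 BTC"  # fraudulent edit
assert not is_valid(ledger)
```

Any tampering with an earlier block is immediately visible to anyone who re-checks the chain, with no central authority needed.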
Now comes the really clever bit: any individual who downloads the open-source software can just use Bitcoin to buy or sell things. Or they can set themselves up as a Bitcoin Miner. These are the people who create Bitcoins. Sounds unlikely, doesn’t it? They also encrypt the transactional data into the blocks of the blockchain. I say ‘they’: what actually happens is that all the miners on the network work in competition to form the data block, and the first to complete the task, taking about ten minutes, ‘wins’. Their new block is linked to the previous one and they are rewarded with Bitcoins and all the transaction fees. All the other miners get nothing. And that’s how the ledger is formed and Bitcoins are ‘minted’, currently at the rate of 12.5 every ten minutes. The competitive element is a clever way of solving the issues caused by a lack of central control:
- Any fraudulent changes to the data made by a miner will be detected when their solution is compared with the others.
- As computing technology improves, the time taken to reach a solution reduces, but the system compensates by increasing a difficulty factor in the algorithm over time. This ensures that new Bitcoins are still only issued at about ten-minute intervals.
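The miners’ competition amounts to a brute-force search, sketched below in much simplified form. Real Bitcoin mining applies a double SHA-256 to a binary block header and compares the result against a numeric target; counting leading hex zeros is a stand-in that shows the same mechanism:

```python
import hashlib

def mine(block_data, difficulty):
    # Brute-force search for a nonce whose SHA-256 digest starts with
    # `difficulty` hex zeros. Each extra zero multiplies the expected
    # work by 16 - raising the difficulty is how the network keeps
    # block times near ten minutes as mining hardware gets faster.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("ten minutes of transactions", 4)
print(nonce, digest)
```

Note the asymmetry that makes the scheme work: finding the nonce takes millions of hash attempts, but any other node can verify the winner’s claim with a single hash.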
Using a winner-takes-all competition does have a serious downside, however: it encourages the miners to invest in more and more powerful computers to guarantee they win. More processing power means more electrical power is needed, resulting in an ‘arms race’ which has only one outcome: a massive increase in CO2 production worldwide. At the time of writing, Bitcoin mining (never mind all the other cryptocurrencies) uses about 45 TWh of electricity per year and has a carbon footprint of around 22 Mt of CO2. That’s about the same as a small country and it’s rising fast. It’s not referred to as ‘mining’ for nothing; just think of all the energy expended by those prospectors in the California Gold Rush nearly 200 years ago. A lucky few made fortunes, the rest probably died trying.
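A quick sanity check on those two figures: dividing the quoted emissions by the quoted consumption gives the implied carbon intensity of the electricity the miners are using. The inputs are simply the numbers stated above, not independent measurements:

```python
# Figures as quoted above (assumed values, not independent measurements)
energy_twh = 45.0        # annual Bitcoin mining consumption, TWh
emissions_mt = 22.0      # resulting emissions, Mt CO2

kwh = energy_twh * 1e9       # 1 TWh = 1e9 kWh
kg_co2 = emissions_mt * 1e9  # 1 Mt = 1e9 kg

intensity = kg_co2 / kwh     # implied grid mix, kg CO2 per kWh
print(round(intensity, 2))   # ~0.49 kg CO2 per kWh
```

About half a kilogram of CO2 per kilowatt-hour is roughly what a fossil-heavy grid emits, which is consistent with mining chasing the cheapest (often coal-fired) electricity.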
Shocked by that? Well, it gets worse and once again digital processing of vast amounts of data is involved. Artificial Intelligence systems based on Deep Learning with Artificial Neural Networks are also fast becoming a concern because of their power consumption during the ‘training’ phase. A recent analysis has shown that since 2012, the amount of processing-power used in the largest AI training runs has been increasing exponentially with a 3.4 month doubling time. Reasons for this include:
- Massive improvements in digital computer technology allowing ever-larger datasets to be processed within realistic timescales.
- Cheaper technology allowing more researchers to gain access to these number-crunching machines.
- Huge investment by the industrial sector in developing practical applications for Artificial Intelligence, such as object characterisation within the vision systems of autonomous vehicles.
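To get a feel for what a 3.4-month doubling time means, a one-line calculation helps (the doubling time is the figure from the analysis cited above; the time spans are my own illustrative choices):

```python
doubling_months = 3.4  # reported doubling time for the largest AI training runs

def growth_factor(months):
    # Exponential growth: compute doubles every `doubling_months`
    return 2 ** (months / doubling_months)

print(round(growth_factor(12), 1))   # roughly 11.5x more compute after one year
print(round(growth_factor(5 * 12)))  # after five years the factor is in the hundreds of thousands
```

Compare that with Moore’s Law, which at its fastest doubled transistor counts only every eighteen months or so – training compute has been growing far faster than the efficiency of the hardware running it, and the energy bill grows with it.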
That last application and others with similar safety-related issues may be behind the increasing size of the training datasets. AI has been seized upon as a potential technique for extracting image information in real time; fast enough for the driving computer to make crucial decisions and act upon them. A snag is that trained neural networks can never deliver 100% accuracy. An object recognition system trained with a large number of images might achieve 95%, but that’s not good enough for a safety system analysing a 30 frames-per-second video feed. But just like biological intelligence, more training usually leads to better results. And there’s the problem: no biological brain can be trained to achieve perfection and neither can AI. Just to get close involves an exponential rise in effort and hence electrical power consumption. Researchers have shown in a recent paper that training a very large model may have as much as five times the carbon footprint of a single fossil-fuel-powered car over its lifetime! That doesn’t include the power required by the on-board ‘inference engine’ which actually runs the neural network containing the trained dataset. On-board computing can account for several kilowatts of electrical power – a pretty big extra load for the battery of an electric vehicle, significantly reducing its range between charges.
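To see why 95% per-frame accuracy is nowhere near good enough, consider the expected number of misclassified frames per second. These are illustrative numbers, and they assume (simplistically) that each frame is classified independently:

```python
# Illustrative figures, assuming frames are classified independently
accuracy = 0.95   # per-frame recognition accuracy
fps = 30          # video frame rate

errors_per_second = (1 - accuracy) * fps
print(round(errors_per_second, 1))  # 1.5 misclassified frames every second

# Accuracy needed to get that down to just one bad frame per hour
needed = 1 - 1 / (fps * 3600)
print(needed)
```

Pushing accuracy from 95% towards the 99.999% region is exactly the regime where each extra ‘nine’ demands that exponential rise in training effort and electrical power.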
The ‘hype’ surrounding the introduction of 5G, the new cellular wireless network protocol, has been extraordinary recently. The list of proposed capabilities is certainly impressive:
- Downlink data speeds up to 20 Gbits/sec.
- Downlink latency as low as 0.5 ms.
- Subscriber capacity increased by a factor of 100.
If these can be achieved then smartphone subscribers will be able to download movies in seconds, factories can scrap cables and have smart machinery communicating wirelessly, and all future autonomous vehicles will be able to ‘talk’ to each other and roadside infrastructure. The key to this exciting vision is being able to use much higher RF carrier frequencies than at present – the so-called ‘mm’ or millimetre wavelength bands. Only these frequency bands can accommodate the channel bandwidths able to provide those huge data rates. As usual, there is a price to be paid for all this increased speed and capacity, and once again it comes down to power. These mmWave signals have a much-reduced range because of factors like moisture in the air and their inability to penetrate the walls of buildings. This means the density of cell towers in the city will have to be massively increased and all sorts of clever antenna technology developed. It is highly likely that the electrical power required by an extensive nationwide 5G network will be considerably more than that taken by current 3G and 4G installations. There’s not much mmWave 5G about yet (if any), so I guess we’ll have to wait and see how much power it needs and if it lives up to the hype.
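That ‘movies in seconds’ claim is easy to sanity-check against the headline downlink figure. The movie size below is an assumed round number, and 20 Gbit/s is the theoretical peak rate, not a real-world throughput:

```python
# Assumed movie size; 20 Gbit/s is the theoretical 5G peak, not a
# real-world throughput figure
movie_gbytes = 5.0     # a compressed HD feature film, GB
link_gbit_s = 20.0     # proposed 5G downlink peak, Gbit/s

seconds = movie_gbytes * 8 / link_gbit_s   # 8 bits per byte
print(seconds)  # 2.0 - a whole film in a couple of seconds at peak rate
```

Even at a tenth of the peak rate the download still takes well under half a minute, which shows why the capacity claims generate so much excitement despite the power questions.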
If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration, and other issues.