While many people in the creative industries are worrying that AI is about to steal their jobs, Oscar-winning film director James Cameron is embracing the technology. Cameron is famous for making the Avatar and Terminator movies, as well as Titanic. Now he has joined the board of Stability AI, a leading player in the world of generative AI. In Cameron’s Terminator films, Skynet is an artificial general intelligence that has become self-aware and is determined to destroy the humans who are trying to deactivate it. Forty years after the first of those movies, its director appears to be changing sides and allying himself with AI.
OpenAI, the company that made ChatGPT, has launched a new artificial intelligence (AI) system called Strawberry. It is designed not just to provide quick responses to questions, like ChatGPT, but to think or “reason”. This raises several major concerns. If Strawberry really is capable of some form of reasoning, could this AI system cheat and deceive humans? OpenAI can program the AI in ways that mitigate its ability to manipulate humans. But the company’s own evaluations rate it as a “medium risk” for its ability to assist experts in the “operational planning of reproducing a known biological threat” – in other words, a biological weapon.
As is its tradition at this time of year, Apple announced a new line of iPhones last week. The promised centrepiece that would make us want to buy these new devices was AI – or Apple Intelligence, as the company branded it. Yet the reaction from the collective world of consumer technology has been muted. The lack of enthusiasm from consumers was so evident it immediately wiped over a hundred billion dollars off Apple’s share price. Even the Wired Gadget Lab podcast, whose hosts are enthusiasts of all things tech, found nothing in the new capabilities that would make them want to upgrade to the iPhone 16.
The US Department of Justice may be on the verge of seeking a break-up of Google in a bid to make it less dominant. If the government goes ahead and is successful in the courts, it could mean the company being split into separate entities – a search engine, an advertising company, a video website, a mapping app – which would not be allowed to share data with each other. While this is still a distant prospect, it is being considered in the wake of a series of rulings in the US and the EU which suggest that regulators are becoming increasingly frustrated by the power of big tech.
It’s now almost two years since Elon Musk concluded his takeover of Twitter (now called X) on 27 October 2022. Since then, the platform has become an increasingly polarised and divisive space. Musk promised to deal with some of the issues which had already frustrated users, particularly bots, abuse and misinformation. In 2023, he said there was less misinformation on the platform because of his efforts to tackle the bots. But others disagree, claiming that misinformation is still rife there. A potential reaction to this may be apparent in recent data highlighted by the Financial Times, which showed the number of UK users of the platform had fallen by one-third, while US users had dropped by one-fifth.
The artificial intelligence boom has already changed how we understand technology and the world. But developing and updating AI programs requires a lot of computing power. This relies heavily on servers in data centres, at a great cost in terms of carbon emissions and resource use. One particularly energy-intensive task is “training”, where generative AI systems are exposed to vast amounts of data so that they improve at what they do. The development of AI-based systems has been blamed for a 48% increase in Google’s greenhouse gas emissions over five years.
Scientists at ETH Zurich have developed a special coating that prevents the lenses in glasses from fogging up. Apparently, not all heroes wear capes. Fogging has been a problem since the advent of optical lenses, but it’s fair to say it reached a peak during the pandemic, when everyone wearing glasses found out the hard way that most face masks vent your breath up towards your eyes. You’d think someone would have fixed this by now, but it’s harder than you might guess. The difficulty of the problem is evident from the lack of existing solutions: you can wipe your glasses off when they fog up or… well, that’s pretty much it.
The UK government’s plans to weaken encryption can “easily be exploited” by hackers and officials, experts have warned. The proposals are part of the controversial Online Safety Bill, which is currently working its way through parliament. Ministers say the legislation would make Britain “the safest place in the world to be online,” but campaigners fear it will erode free speech and privacy. Their prime concern involves the threat to end-to-end encrypted (E2EE) messenger apps. Under the mooted measures, telecoms regulators could force platforms to scan through private messages for illegal content.