There was plenty to discuss over the past week at Gloo HQ, from mobile hackers to hair hackers. Is AI going to help us all be friendlier on Instagram, and does the live-action The Lion King make anyone else think of AI?
Reset your password?
When will companies start taking cybersecurity more seriously? Every week we seem to see another story about a company that is still getting things wrong. Last week 7-Eleven announced that it was suspending mobile payments in Japan after hackers stole $500,000. How were they able to do this? Through a well-known vulnerability.
7-Eleven offered customers the option to scan a product's barcode on their phone and pay for it with the card connected to their account. The system also allowed users to send a password reset email to an unlinked account, which (yes, you guessed it) went straight to the hackers. The hackers then updated the passwords and went on a spending spree. No matter how much we write about cybersecurity, we continue to see companies making the same mistakes; one of our blogs even discusses this very vulnerability. Perhaps 7-Eleven should start reading more of our content.
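The flaw boils down to a reset endpoint that trusts a caller-supplied email address. Here's a minimal sketch of the difference (purely illustrative: the user store, function names, and messages are our own, not 7-Eleven's actual system):

```python
# Hypothetical user store for illustration only.
ACCOUNTS = {"alice": "alice@example.com"}

def request_reset_flawed(username: str, email: str) -> str:
    """Flawed design: the reset link goes to whatever address the caller
    supplies, so an attacker who knows a username can hijack the account."""
    if username in ACCOUNTS:
        return f"reset link sent to {email}"  # attacker controls 'email'
    return "no such account"

def request_reset_safe(username: str) -> str:
    """Safer design: the reset link only ever goes to the address on file,
    and the caller never gets to choose the destination."""
    if username in ACCOUNTS:
        return f"reset link sent to {ACCOUNTS[username]}"
    return "no such account"
```

With the flawed version, `request_reset_flawed("alice", "attacker@evil.example")` happily mails the reset link to the attacker; the safe version takes no email parameter at all, which is the standard fix.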
We love connected devices as much as the next person, but there's a limit, and it looks like we've reached it with the introduction of the world's first Bluetooth hair straighteners. The straighteners connect to an app on your phone that lets you control the heat and style settings remotely, and even switch the straighteners on and off. The remote switch-off would be pretty handy if you're on the bus and realise you left them on, except that the app only works when you're within Bluetooth range.
The limited range isn't even the biggest problem with these straighteners. No, the biggest problem is that they're easily hacked. Security researchers at Pen Test Partners tested the product and found that the app contains no authentication, so anyone within range can connect and start adjusting the settings. The app is unlikely to hold sensitive information, but when an estimated 650,000 house fires in the UK are started by hair straighteners, the damage caused by hackers could be significant.
Are you sure you want to post that?
Instagram has introduced a new feature in an attempt to stop people posting harmful comments. The feature uses machine learning to identify harmful language and prompts users to change it before publishing. The aim is to use this instead of blocking accounts, which Instagram acknowledges can often make bullying situations worse. But how effective is this going to be?
If you're determined to post comments with offensive and belittling language, is an automated prompt going to change your mind? And what about repeat offenders? Will the system be smart enough to recognise people who have rejected the prompt and posted harmful comments anyway? It's certainly an interesting concept, and we'll be interested to see how much of an impact it has on the site.
King of the uncanny valley
We're all getting ready to relive our childhood at Gloo HQ with the release of the live-action The Lion King. While we're excited to see how far technology has come (the trailers have looked phenomenal), we can't help but agree with David Ehrlich at IndieWire: just because the technology exists to make this movie doesn't mean they should have made it. We love technology and the things it's enabling us to do, but are live-action lions singing just a little bit too much?
The term “uncanny valley” has been used in many discussions we’ve seen of the new movie. The term refers to the unsettling feeling you get when things look human and act human, but there’s something not quite right about them. Just like many applications of AI, from natural language chatbots to realistic deepfakes, the live action Lion King is incredibly convincing—but we’re still a little cautious about it. Let’s see how opinions change when the movie is released on Friday.
Posted by Katie on 15 July 2019