Lyft just got a new logo, but you probably didn't notice it.

Over the past few weeks, Lyft has quietly rolled out an updated logo, a broadened color palette, and a custom typeface in its app and across its social media platforms. The new look, designed by the branding studio Koto, is meant to serve as a natural progression of the brand's existing identity, injecting it with a subtle boost of structure and maturity.

According to Arthur Foliard, executive creative director at Koto, the changes come at a pivotal moment for Lyft, which is currently testing an expansion into autonomous driving and slowly gaining on its main competitor (and dominant industry player), Uber. In an interview with Fast Company this May, Lyft CEO David Risher noted that, since he joined Lyft in 2023, the company has brought its U.S. market share from 26% to 31%.

Lyft's blink-and-you'll-miss-it new look follows a recent trend in which larger brands like Walmart and Adobe have skipped the big rebrand (which ruled the branding world several years ago) in favor of an understated-yet-practical refresh.

[Image: Lyft's previous logo (top) and new version (bottom). Koto/Lyft]

Lyft's new look

Looking at Lyft's new logo next to its old one is a bit like a game of spot the difference. But one key detail immediately jumps out: The playful path connecting the f and t characters has been severed. "The Lyft logo is one of the most recognizable in tech, so we approached it with a lot of care," Foliard says. "The original had a ton of character, but it wasn't optimized for the way the brand shows up today, especially in smaller, digital contexts. The ligature between the f and t in particular caused legibility issues and sometimes felt overly stylized."

[Image: Koto/Lyft]

To preserve the visual cues that make Lyft's logo recognizable, Foliard's team broke up the ligature but kept the spirit of the mark's swooping, bold letters the same. Besides the newly separated f and t characters, the rest of the logo has been only slightly slimmed down and realigned. "We adjusted the weight, spacing, and proportions to make the wordmark feel more confident and contemporary, less ornamental, more intentional," Foliard says. "The result is a logo that feels mature without being cold. It still has that signature Lyft charm, but now it holds up wherever it appears, from app icons to car decals to national campaigns."

That focus on versatility was also applied to Lyft's color palette and typography. Lyft Pink, the company's signature shade of neon purplish-pink, has been given a more focused role. Whereas Lyft Pink was previously used wholesale across the branding, Koto built out an accompanying palette of off-whites, deep pinks, and neutrals to keep the bright hue reserved for the brand's most important moments, like the logo.

[Image: Koto/Lyft]

One feature of the branding that was entirely overhauled is its typography. Foliard says Lyft was previously using several functional typefaces that, while serviceable, didn't quite capture Lyft's warmth and humanity. So, in collaboration with the type design studio NaN, Koto's team created a custom typeface for Lyft called Rebel Sans. It's a classic-looking sans serif, available in a range of weights, that echoes the logo with flourishes like a half-smile shape in the y character.

"We wanted it to feel like it had been made by people, for people," Foliard says. "It features distinct humanist details (slight line weight variation, gentle curves, and subtle flares) that bring a sense of the human hand into both display and text. Underneath it all, it's grounded in a more geometric structure, giving it the clarity and sophistication needed to scale across the brand."

[Image: Koto/Lyft]

Lyft gets the baby Botox treatment

Lyft's spruced-up identity is the latest in a series of similar approaches from other major brands. If the early 2020s were the heyday of the major rebrand, and 2024 was the era of the dialed-back brand refresh, then 2025 is seeing an even more minimal wave of "baby Botox" branding. This year, several brands have moved away from headline-grabbing overhauls in favor of small updates that are intended to fly under the radar.

For an ultra-recognizable brand like Walmart, this approach is meant to avoid alienating customers by shedding too much core brand affinity at once. In January, Walmart introduced its biggest branding update in two decades, an update that, rather than actually replacing any assets, opted to simply spruce up the existing look with brighter colors and chunkier shapes. Other brands, like Amazon (which also worked with Koto on its logo touch-up) and Google, have similarly rolled out new logos in recent months that would likely require a trained eye to spot. In May, the studio Mother Design, which gave Adobe's logotype a subtle facelift, encapsulated this trend by explaining that its goal was to create an update that looks as if it's always been there.

For Lyft, Foliard says, this new branding wasn't about changing who Lyft is, but rather about sharpening what was already there. "The timing felt right to strike that balance between evolution and preservation, ensuring the brand could grow with the business while keeping the heart and humanity that made it iconic in the first place," Foliard says.
Category:
E-Commerce
A stable "release" version of Apple's iOS 26 is due in September, but you can try an in-progress version now, called the public beta. It previews a revamped interface and new features in apps like Messages and Phone (both with spam filtering), Camera, Wallet, and especially CarPlay. Models starting with the iPhone 15 Pro also get upgrades to the Apple Intelligence AI suite, including live translation, improved image creation, and the ability to search visually across apps. The translucent Liquid Glass interface is seeing a bit of a revival in areas such as Notification Center, after Apple toned it down in earlier betas.

Is the iOS 26 public beta safe to install?

The public beta follows four developer betas meant for app creators (although others tend to install betas out of curiosity). Adding the word "public" doesn't mean this beta is without risks. To get it, you have to accept an agreement that absolves Apple of responsibility for any problems it may cause. This includes bricking, that is, rendering the phone inoperable.

It's safest to test the public beta on a spare device, which Apple's beta site strongly recommends. You can lower the risk to an old model or your current one by first backing up your iPhone and learning how to unbrick it and roll it back to the latest release version of iOS 18. We'll walk you through how to do that further down. These tips may also help with glitches you may encounter in the release version.

How to get iOS 26 public beta

First, check whether your iPhone supports iOS 26. Apple's list includes models back to 2019's iPhone 11 and 2020's SE (2nd generation), both using the A13 Bionic chip. If you have an iPhone X or earlier model, it may show an option to download iOS 26, but won't let you.

Getting the beta is easy: Visit the Apple Beta site, click Sign Up, and log in with the same Apple ID your iPhone uses. Signing up provides access to all Apple OS 26 betas: iOS, iPadOS, macOS, watchOS, and tvOS, plus HomePod software.

Does installing iOS 26 public beta void my warranty?

According to Apple's FAQ, installing the beta won't void your hardware warranty, although you will have to restore to a stable OS version before getting service.

[Image: The Apple Beta Software Program login screen for signing in with an Apple ID]

But within the roughly 5,500-word Apple Beta Software Agreement is the clause: "APPLE SHALL NOT BE RESPONSIBLE FOR ANY COSTS, EXPENSES OR OTHER LIABILITIES YOU MAY INCUR . . . INCLUDING BUT NOT LIMITED TO ANY DAMAGE TO ANY EQUIPMENT, SOFTWARE OR DATA." (Fast Company has asked Apple to clarify whether "equipment" would include the iPhone hardware and will update if we get an answer.) The agreement does say that the company may provide support through its beta program, at Apple's option. TL;DR: Don't count on help, and take your own precautions.

How to back up your iPhone before installing iOS 26 public beta

Before you do anything, back up your iPhone. The easiest way is online: Go to Settings, tap your name, then iCloud > iCloud Backup. Apple provides 5GB of free storage. Paid tiers start at 50GB for $0.99 per month and 200GB for $2.99.

You can also back up to a computer over USB. In recent versions of macOS: Open Finder, click your iPhone in the left panel, then click Back Up Now. Users on Windows or on macOS Mojave (10.14) and earlier should use iTunes. (Yes, it's still out there.) Click the Device button near the top left of the iTunes window, then click Summary > Back Up Now.

[Image: Before installing the iOS 26 public beta, note these backup options in your macOS Finder.]
How to download and install iOS 26 public beta on your iPhone

Now comes the main event. On your iPhone, tap Settings > General > Software Update. Tap Beta Updates to see multiple options on the next screen, including Off and possibly betas for several versions of iOS. Tap to place a check mark next to iOS 26 Public Beta, then tap the back button and tap Update Now.

[Image: iPhone screenshots showing how to select iOS 26 Public Beta from the Beta Updates menu]

How to roll back from iOS 26 beta

In the event that iOS 26 does brick your phone, a new tool called Recovery Assistant may automatically activate, allowing you to monitor the process wirelessly from another Apple device.

If Recovery Assistant doesn't appear or work, try the old-fashioned way: Connect the iPhone to your computer over USB and open Finder or iTunes (depending on your computer's operating system). Press and release the iPhone's volume up button, then the volume down button. Then press and hold the side button until you see the Recovery Mode screen with cable and computer icons. If you don't see them, throw yourself at the mercy of Apple Support by contacting them online.

If you do, Finder or iTunes will show the message "There is a problem with the iPhone that requires it to be updated or restored." It's best to select the Restore option, which erases the iPhone and installs the latest public release of iOS. Then restore the deleted data and settings from your backup.

[Image: The macOS Finder shows an iPhone in Recovery Mode with the option to update or restore.]
Category:
E-Commerce
Every CEO I know wants their team to use AI more, and for good reason: It can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches?

Sift's latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, the data reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn't limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern about AI being used to scam someone has decreased 18% in the past year, and yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it's sharing trade secrets at work or falling for scam texts at home, the pattern is the same: Familiarity with AI is creating dangerous blind spots.

The Confidence Trap

In a workplace setting, employees often turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to "punch it up," sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that's trying to be more efficient, helpful, or responsive.

But as the data shows, digital familiarity can create a false sense of security. The people who think they "get AI" are the ones most likely to leak sensitive data through it or to struggle to identify malicious content. Every time an employee drops nonpublic context into a GenAI tool, they are, knowingly or not, transmitting business-sensitive data into a system that may log it, store it, or even use it to train future outputs. Not to mention, if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do? The challenge with this kind of data exposure is that traditional monitoring won't catch it. Because these tools are often used outside of a company's intranet (its internal software network), employees are able to input almost any data they can access. The uncomfortable truth is that you probably can't know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn't mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern, as well as potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn't realistic and puts your organization at a competitive disadvantage.
Instead, establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can't be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid.

To encourage responsible AI use, provide approved alternatives. Create company-sanctioned AI workflows for everyday use cases, built on tools that don't retain data or use any inputs for AI training. (A minimal sketch of what such a pre-submission check might look like appears at the end of this piece.) Make sure your IT teams vet all AI tools for proper data governance; this is especially important because different account types for the same AI tool can have different data retention policies. Clear guidelines and vetting also help employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don't panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? You may need to notify affected parties, depending on the nature of the data. Then use these incidents as learning opportunities: Review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot we can learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung's response was swift: It restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks. While most companies lack the resources to build chatbots themselves, they can take a similar approach by using an enterprise-grade account that specifically opts their data out of AI training.

AI can bring massive productivity gains, but that doesn't make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI's benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.
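To make the idea of a company-sanctioned workflow a bit more concrete, here is a minimal sketch, in Python, of the kind of red-line check an internal AI gateway might run before a prompt ever leaves the company. The categories, patterns, and example text are illustrative assumptions modeled on the guidelines above, not a description of any particular vendor's tooling or a complete data-loss-prevention system.

```python
# Illustrative sketch only: a pre-submission "red line" check for prompts
# headed to an external GenAI tool. Patterns and labels are assumptions
# based on the guidelines above, not a full DLP solution.
import re

RED_LINE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marker": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the red-line categories detected in the prompt text."""
    return [label for label, pattern in RED_LINE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    draft = "Punch up this email to jane.doe@example.com about our confidential Q3 road map."
    violations = screen_prompt(draft)
    if violations:
        # A sanctioned workflow would block or redact here rather than
        # forwarding the prompt to a public chatbot.
        print("Blocked before submission:", ", ".join(violations))
    else:
        print("No red-line data detected; OK to send to the approved tool.")
```

In practice, a check like this would sit in front of an enterprise-grade account that is opted out of AI training, with the pattern list owned and updated by the same IT team that vets the tools.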
Category:
E-Commerce