From the outside looking in, the life of a content creator is enviable. Shopping, jet-setting, star-studded events, all documented for their audience of thousands. But new research tells a different story.

A study by Creators 4 Mental Health, conducted in partnership with Lupiani Insights & Strategies and sponsored by Opus, BeReal, Social Currant, Statusphere, and the nonprofit AAKOMA Project, spoke to more than 500 full- and part-time creators across North America about their work, mental health, and well-being. One in ten creators reported having suicidal thoughts tied to their work. That rate is nearly double the national average of 5.5%, according to the National Institutes of Health.

Only 8% of creators described their mental health as excellent. For those who have been in the industry for more than five years, that number drops to 4%. The report found that 65% experience anxiety or depression related to their work, and 62% feel burned out. Rather than getting better over time, this only gets worse: those who have worked five years or more report the highest rates of burnout, stress, and financial instability.

Content creation is a numbers game. Yet those who check analytics obsessively also have significantly worse emotional well-being scores. Of those surveyed, 65% said they obsess over content performance, and 58% said their self-worth declines when content underperforms. Likes, views, and engagement directly correlate with how much money content creators can make, either through creator funds or by negotiating brand deals. However, nearly 69% of creators said their income is unpredictable or inconsistent, a factor that also strongly correlates with poor mental health outcomes such as anxiety and depression.

Far from the cushy work life some would imagine, burnout affects creators almost as much as it does the wider U.S. population. The difference is that creators often face these challenges without access to any kind of specialized mental healthcare or workplace benefits.

"Creators are the new workforce of the digital age, doing the work of entire teams without support and protections," says Shira Lazar, Emmy-nominated creator and founder of Creators 4 Mental Health. "This study is a wake-up call for platforms, brands, and policymakers to treat creator mental health as a workforce issue, not a personal problem."

As much as creators' complaints about the industry are often met with calls to quit or get a real job, content creation as a career path isn't going anywhere. In fact, the creator economy is growing rapidly, expected to nearly double in value to $480 billion by 2027, according to Goldman Sachs. Instead, change has to start with the platforms and brands that rely on content creators' labor. Two-thirds of those surveyed said they want income stability tools built into social media platforms; 59% said they want transparent pay rates from brands.

"These results are a clear call to action for brands, platforms, nonprofits, and creators themselves," says Lazar. "Creators are suffering as a result of their work, and something has got to give."
Category:
E-Commerce
Discovering that a colleague with the same job title is earning more than you is never fun, though it is quite common. According to a global survey of 1,850 workers by résumé-building platform Kickresume, 56% have discovered that someone with the same job at their company is earning more than them, and another 24% have their suspicions.

"People are much less willing to discuss their salaries than we thought they would be; there's still quite a stigma around it," says Kickresume's head of content Martin Poduska, who helped conduct the study. "The weirdest thing is that we didn't identify a good reason for it."

Poduska explains that compensation is far from a precise science, and that keeping the topic taboo only works to the benefit of the employer. "The secrecy that surrounds it prevents organizations from coming up with more effective or more transparent ways of rewarding people," he says.

In recent years, there have been efforts to mandate wage transparency in certain cities and states. For example, California, Washington, New York, Maryland, Colorado, and Rhode Island have had pay transparency laws on the books for years, and a handful more, including Illinois, Massachusetts, Minnesota, New Jersey, and Vermont, added them this year. Calls for more robust pay transparency have even gone viral on TikTok, and the Kickresume survey suggests Gen Zers and millennials are much more willing to talk about their compensation than Gen Xers and boomers.

With more people sharing salary information, the research suggests many won't be happy with what they learn. Here's what to do when you discover a colleague is making more for the same job.

Don't assume the worst

Not everyone who found out that a colleague with the same job title was outearning them took issue with it. In the Kickresume survey, about 40% didn't really care what others were making, though the rest did. That includes 45% of women compared to just 33% of men, which may not be surprising given the gender wage gap.

But that could be because there are a lot of reasons why two people with the same title may get paid differently, and any pay discrepancies could be unintended, or simply reflect nuances in talent and market trends. These reasons could range from résumé points, like education and experience, to differences in their responsibilities, even if they share a job title. Plus, those who are hired in a more competitive talent market also typically have more bargaining power than those who are hired in slower economic periods.

"I think that people assume that companies have it all figured out in terms of jobs and titles and career paths, but it's really not that neat and clean," says career coach Caroline Ceniza-Levine. "Even if a company doesn't do it deliberately, there's so many opportunities for inequities to develop in compensation, and no one's going to advocate for your salary more than you will. So you might as well pay attention."

Take a breath, and do your homework

Discovering that someone with the same job title is earning more can provoke a lot of emotions, but a heated confrontation is unlikely to resolve the issue. "You don't want to react the moment you find out," says Andres Lares, managing partner at Shapiro Negotiations Institute, which offers negotiation consulting and training services. "You want to take some time to digest it, and that also gives you time to find some objective information."

Lares explains that those emotions are best channeled into research about market rates for your role. "That prepares you to have these conversations from a place of knowledge," he says. "The more you do that, the less reactionary and emotional you are, and the more objective you are when you approach [your manager]."

Approach with caution

While there are wrong moments to confront your manager, like immediately after finding out someone is earning more, there may never be a right time. "It can be very easy to stall forever waiting for the right time, and the right time will really never happen," says Lares. "There's always going to be excuses not to do it."

If you want to talk to your boss about your compensation as it compares to your colleagues', Lares suggests scheduling an in-person appointment or bringing it up during a regularly scheduled one-on-one.

Ask questions

Rather than opening the conversation with accusations and demands, Lares recommends starting with questions. "Sit down with your boss and ask about pay structures. How does it work? How do you come up with the pay structures for each person on your team? How do I compare in my compensation with others in the role? Where does my performance land compared to my colleagues? What would set me up best to increase my compensation?" he says. "Not only are you getting valuable information and seeing a more complete picture, but they can see that you're approaching this with empathy."

Test the market, carefully

The most direct way to understand what you're worth is to test the market yourself. Even if you're not ready to jump ship, Vivian Garcia-Tunon, founder of executive coaching, leadership development, talent strategy, and advisory services provider VGT People Advisory, says sending out a few applications may be useful, as long as your negotiation doesn't become an ultimatum.

"Probably eight out of 10 people will go test the market and see if they can get a job offer and then have the conversation with their manager," she says. "It's a strategy that brings the individual more confidence. But there's a risk associated with it, which is that if you use it as a negotiation strategy, you have to be willing to walk."

That other offer, in other words, may be a card you want in your back pocket heading into the negotiations, but not necessarily one you want to play. "If you're seriously considering leaving, you can put that offer on the table," Garcia-Tunon says. "If you're trying to use it to get an increase, you can position it in the conversation as another piece of information."

Be patient

Just because you're walking into your boss's office to talk about a raise doesn't mean you're going to walk out with a higher salary. Those decisions rarely happen on the spot, and may require conversations with other stakeholders, like human resources, accounting, and leadership teams.

"Sometimes your manager agrees with you, but they then have to go higher up," says Ceniza-Levine. "One thing that I've actually seen with a lot of people is that they have this initial conversation with their manager, the manager promises them something, and then nothing happens."

Ceniza-Levine explains that your salary will never be as pressing to anyone else as it is to you, and, whether intentionally or not, it can take a long time for managers to follow up. "Be prepared to have multiple conversations, check in on what is happening, and leave a paper trail," she says. "Send an email saying, thank you so much for meeting with me; as discussed, you're going to talk to senior leader X about a merit raise for me, and then we can schedule another meeting."
Category:
E-Commerce
OpenAI watchers have spotted something curious over the last week. References to GPT-5.1 keep showing up in OpenAI's codebase, and a cloaked model codenamed Polaris Alpha, widely believed to have come from OpenAI, appeared on OpenRouter, a platform that AI nerds use to test new systems. Nothing is official yet. But all of this suggests that OpenAI is quietly preparing to release a new version of its GPT-5 model. Industry sources point to a potential release date as early as November 24.

If GPT-5.1 is for real, what new capabilities will the model have? As a former OpenAI beta tester, and someone who burns through millions of GPT-5 tokens every month, here's what I'm expecting.

A larger context window (but still not large enough)

An AI model's context window is the amount of data (measured in tokens, which are basically bits of words) that it can process at one time. As the name implies, a larger context window means that a model can consider more context and external information when processing a given request. This usually results in better output.

I recently spoke to an artist, for example, who hands Google's Gemini a 300-page document every time he chats with it. The document includes excerpts from his personal journal, full copies of screenplays he's written, and much else. This insanely large amount of context lets the model provide him much better, more tailored responses than it would if he simply interacted with it like the average user.

This works largely because Gemini has a 1 million token context window. GPT-5's, in comparison, is relatively puny at just 196,000 tokens in ChatGPT (expanded to 400,000 tokens when used by developers through the company's API). That smaller context window puts GPT-5 and ChatGPT at a major disadvantage. If you want to use the model to edit a book or improve a large codebase, for example, you'll quickly run out of tokens.

When OpenAI releases GPT-5.1, sources indicate that it will come with a 256,000 token context window when used via the ChatGPT interface, and perhaps double that in the API. That's better than today's GPT-5, to be sure. But it still falls far short of Gemini, especially as Google prepares to make its own upgrades. OpenAI could make a surprise last-minute upgrade to 1 million tokens. But if it keeps the 256,000 token context window, expect plenty of grumbling from the developer community about why the window still isn't big enough.

Even fewer hallucinations

OpenAI's GPT-5 model falls short in many ways. But one thing it's very good at is providing accurate, largely hallucination-free responses. I often use OpenAI's models to perform research. With earlier models like GPT-4o, I found that I had to carefully fact-check everything the model produced to ensure it wasn't imagining some new software tool that doesn't actually exist, or lying to me about myriad other small, crucial things.

With GPT-5, I find I have to do that far less. The model isn't perfect. But OpenAI has largely solved the problem of wild hallucinations. According to the company's own data, GPT-5 hallucinates only 26% of the time when solving a complex benchmark problem, versus 75% of the time with older models. In normal usage, that translates to a far lower hallucination rate on simpler, everyday queries that aren't designed to trip the model up.

With GPT-5.1, expect OpenAI to double down on its new, hallucination-free direction. The updated model is likely to do an even better job of avoiding errors. There's a cost, though. Models that hallucinate less tend to take fewer risks, and can thus seem less creative than unconstrained, hallucination-laden ones. OpenAI will likely try to carefully walk the line between accuracy and creativity with GPT-5.1. But there's no guarantee it will succeed.

Better, more creative writing

In a similar vein, when OpenAI released its GPT-5 model, users quickly noticed that it produced boring, lifeless prose. At the time, I predicted that OpenAI had essentially given the model an emotional lobotomy, killing its emotional intelligence in order to curb a worrying trend of the model sending users down psychotic spirals.

Turns out, I was right. In a post on X last month, Sam Altman admitted that "We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues." But Altman also said in the post that "now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

That process began with the rollout of new, more emotionally intelligent personalities in the existing GPT-5 model. But it's likely to continue and intensify with GPT-5.1. I expect the new model to have the overall intelligence and accuracy of GPT-5, but with a personality to match the emotionally deep GPT-4o. This will likely be paired with much more robust safeguards to ensure that 5.1 avoids conversations that might hurt someone who is having a mental health crisis. Hopefully, with GPT-5.1 the company can protect those vulnerable users without bricking the bot's brain for everyone else.

Naughty bits

If you're squeamish about NSFW stuff, maybe cover your ears for this part. In the same X post, Altman subtly dropped a sentence that sent the internet into a tizzy: "As we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults."

The idea of America's leading AI company churning out reams of computer-generated erotica has already sparked feverish commentary from such varied sources as politicians, Christian leaders, tech reporters, and (judging from the number of upvotes) much of Reddit. For its part, though, OpenAI seems quite committed to moving ahead with this promise. In a calculus that surely makes sense in the strange techno-libertarian circles of the AI world, the issue is intimately tied to personal freedom and autonomy. In a recent article about the future of artificial intelligence, OpenAI reiterated that "We believe that adults should be able to use AI on their own terms, within broad bounds defined by society," placing full access to AI on par with electricity, clean water, or food.

All that's to say that with the release of GPT-5.1 (or perhaps slightly after the release, so the inevitable media frenzy doesn't overshadow the new model's less interesting aspects), the guardrails around ChatGPT's naughty bits are almost certainly coming off.

Deeper thought

In addition to killing GPT-5's emotional intelligence, OpenAI made another misstep when releasing GPT-5. The company tried to unify all queries within a single model, letting ChatGPT itself choose whether to use a simpler, lower-effort version of GPT-5 or a slower, more thoughtful one. The idea was noble; there's little reason to use an incredibly powerful, slow, resource-intensive LLM to answer a query like, "Is tahini still good after one month in the fridge?" But in practice, the feature was a failure. ChatGPT was no good at determining how much effort was needed to field a given query, which meant that people asking complex questions were often routed to a cheap, crappy model that gave awful results.

OpenAI fixed the issue in ChatGPT with a user interface kludge. But with GPT-5.1, early indications point to OpenAI once again bifurcating its model into Instant and Thinking versions. The former will likely respond to simple queries far faster than GPT-5, while the latter will take longer, chew through more tokens, and yield better results on complex tasks. Crucially, it seems like the user will once again be able to explicitly choose between the two models. That should yield faster results when a query is genuinely simple, and a better ability to solve complicated problems.

OpenAI has hinted that its future models will be capable of making very small discoveries in fields like science and medicine next year, with systems that can make more significant discoveries coming as soon as 2028. GPT-5.1 will likely be a first step down that path.

An attempt to course correct

Until OpenAI formally releases GPT-5.1 in one of its signature, wonky livestreams, all of this remains speculative. But given my history with OpenAI, going back to the halcyon days of GPT-3, these are some changes I'm expecting when the 5.1 model does go live.

Overall, GPT-5.1 seems like an attempt to correct many of the glaring problems with GPT-5, while also doubling down on OpenAI's more freedom-oriented, accuracy-focused approach. The new model will likely be able to think, (ahem) flirt, write, and communicate better than its predecessors. Whether it will do those things better than a growing stable of competing models from Google, Anthropic, and myriad Chinese AI labs, though, is anyone's guess.
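For readers who want to see what the context-window figures discussed above mean in practice, here is a minimal, hypothetical Python sketch that uses the tiktoken library to estimate whether a document fits inside a given window. The window sizes are simply the numbers cited in this article (the GPT-5.1 figure is rumored, not confirmed), the file name is made up, and the o200k_base encoding is only a rough stand-in, since GPT-5's actual tokenizer has not been published.

```python
# Rough sketch: estimate whether a document fits in a model's context window.
# Window sizes are taken from the article above; the GPT-5.1 entry is a rumor,
# and the tokenizer is an approximation (GPT-5's real tokenizer isn't public).
import tiktoken

CONTEXT_WINDOWS = {
    "gemini (advertised)": 1_000_000,   # 1M-token window cited in the article
    "gpt-5 (ChatGPT)": 196_000,         # per the article
    "gpt-5 (API)": 400_000,             # per the article
    "gpt-5.1 (rumored, ChatGPT)": 256_000,
}

def estimate_tokens(text: str) -> int:
    """Approximate the token count of `text` with a public tiktoken encoding."""
    enc = tiktoken.get_encoding("o200k_base")  # GPT-4o-era encoding, used as a rough proxy
    return len(enc.encode(text))

def fits(text: str) -> dict[str, bool]:
    """Report which of the windows above the document would (roughly) fit inside."""
    n = estimate_tokens(text)
    return {model: n <= window for model, window in CONTEXT_WINDOWS.items()}

if __name__ == "__main__":
    # Hypothetical 300-page document, like the journal-plus-screenplays file
    # the artist in the article hands to Gemini.
    with open("journal_and_screenplays.txt") as f:
        doc = f.read()
    print(estimate_tokens(doc), "tokens")
    print(fits(doc))
```

A document of that size will typically clear a 1-million-token window comfortably while overflowing a 196,000-token one, which is the gap the article describes.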
Category:
E-Commerce