
2025-11-20 07:30:00| Fast Company

Colleagues are a critical part of what makes your work experience enjoyable and meaningful. You interact with your colleagues and (in the best of cases) create a neighborhood of peers that you can rely on both to push the work forward and to share the joys and tribulations of the workday. That's why annoying colleagues can be a particular thorn. When you have a peer at work whom you don't want to deal with, it disrupts the flow of your day and diminishes your intrinsic enjoyment of work.

So, what can you do to deal with annoying coworkers? A lot of that depends on what is making them annoying. Here are a few possibilities.

Missing social norms

One thing that can make a colleague annoying is that they just don't understand the social norms of the office. This is particularly likely to be true of people who are new to your organization, and especially those who are new to working in general. These social norms can also be very hard to pick up when the company works remotely.

You might want to help these colleagues get acclimated to the workplace. Talk to them about what colleagues expect in the organization. Offer to give them feedback on the interactions you witness in meetings or group gatherings. Give them a heads-up about upcoming situations. The idea here is that annoying colleagues are particularly annoying when you feel there is nothing you can do to avoid them. By becoming a proactive part of the solution, you give yourself some agency that will make your colleague feel less like a rock in your shoe.

Lack of trust

Some colleagues are annoying because you flat-out don't trust them. You suspect that they are using any information they obtain to get ahead at the expense of others. Perhaps they have the ear of leadership and tend to badmouth members of the team. They might even try to take more credit for projects than they deserve. This is a hard one, because you have to be able to engage with your peers to get your work done.
For one thing, if you witness a colleague doing something that undermines your trust in them, find a time to talk with them. It is possible that they are insecure and doing some of these things to feel successful. They may not even realize that others have picked up on what they're doing. The aim is to convince your colleague that playing with the team is likely to make them more successful than undermining it. If you do have this conversation, focus on the observable facts without implying a motive. Tell them what you saw them do and allow them to talk to you about why. Hopefully, the conversation will improve that colleague's future behavior. Of course, if they deny having done anything wrong, it reinforces your lack of trust.

If you do have a colleague who is truly untrustworthy, try to avoid engaging with them more than necessary. Hopefully, their supervisor will have some sense that this person isn't trustworthy and will provide feedback to correct their behavior. Machiavellian individuals in particular may treat their peers poorly but suck up to leadership. Still, your best bet is to steer clear and focus your efforts on your trusted colleagues.

Social awkwardness and neurodivergence

Some people are just socially awkward. They mean well, but they don't pick up on the social cues that others use to know that a social interaction isn't going well or that they should leave someone alone. Some (though not all) of these socially awkward individuals may be on the autism spectrum. There are two things to do here. First, give some grace. If you're fortunate enough to be socially skilled, you may not realize how hard it is to be socially awkward. Everyone wants to feel some connection to their colleagues, and your socially atypical and neurodivergent colleagues have a particularly hard time sustaining those connections. Being a good colleague and friend is going to improve their work experience (and yours).
As you befriend these colleagues, talk with them about whether they would appreciate you letting them know when they're being a bother. Often, they will value getting more direct feedback about when an engagement has gone awry. That way, you can help them and also redirect interactions before they become annoying.

AITA?

If several colleagues are being annoying, it could be a run of bad luck, but there is also a significant chance that the problem is you. Reflect a bit on the way you engage with your colleagues. Are there things you're doing that may rub them the wrong way? If you can't figure it out, find a colleague you think you get along with well, and ask.

If you do figure out (or are told) that you are driving your colleagues nuts, then sit down with your colleagues individually and apologize. Discuss the situation and assure them that you want to be a good colleague and are working to improve. Conversations like that can go a long way toward repairing your relationships with your peers.


Category: E-Commerce

 


2025-11-20 07:00:00| Fast Company

Business leaders are scrambling to understand the fast-moving world of artificial intelligence. But if companies are struggling to keep up, can today's business schools really prepare students for a new landscape that's unfolding in real time out in the real world?

Stanford University thinks it might have the answer. At its Graduate School of Business, a new student-led initiative aims to arm students for a future where AI is upending work in ways that are still unfolding. The program, called AI@GSB, includes hands-on workshops with new AI tools and a speaker series with industry experts. The school also introduced new courses around AI, including one called AI for Human Flourishing, which aims to shift the focus from what AI can do to what it should do.

But Sarah Soule, a longtime organizational behavior professor who became dean of the business school this year, told Fast Company that preparing students for this brand-new work environment is "easy to say, harder to do. Especially given how quickly AI is changing every function of every organization," she says.

So the school hopes to lean on its network of well-connected alumni, as well as its location in Silicon Valley, the heart of the AI boom, to lead business schools not just into a future where AI knowledge will be necessary, but in the present, where it already is.

[Photo: SGSB]

"It would not be easy for me as the new dean to just come in and mandate that everybody begin teaching AI in whatever their subject matter is," Soule said, explaining that that approach likely would fail. In a conversation with Fast Company, the dean shared more about what she hopes will work, and how she plans to train the next generation of leaders for an AI-powered world. This interview has been lightly edited for clarity.

Many business schools are adding AI courses. But it sounds like you're thinking of AI as less of an add-on and more like a core part of the school's DNA going forward. How do you make that distinction?
I think it has to be [a core part]. Developing a very holistic leadership model, alongside all the offerings in AI, is going to allow us, I hope, to think about the questions of ethics and responsibility, and the importance of human beings and human connection, especially in an AI-powered organization. AI is going to change the future of work completely. So having those two parallel themes at the same time is going to be critical.

What does ethical, responsible AI mean to you?

HR comes to mind right away. I'm thinking about privacy concerns: What do we need to be worried about? If we're outsourcing scans of résumés and so on to algorithms and agents, do we need to worry about privacy? I also think about: What does the world look like if a lot of entry-level jobs begin to disappear? How do we think responsibly about reskilling individuals for work that will enable AI? I don't think we have the answers to these questions, but I'm really glad that we as a business school are going to be, and have been, asking these questions.

The new AI initiative is student-led. But what is the school doing to train faculty to better understand how they can, or should, teach about AI, or use AI in their classes? Implementing this has been a mixed bag for a lot of universities.

We have a teaching and learning hub here that has very talented staff [members] who are pedagogical experts and who are offering different kinds of sessions on AI. So that's of course been helpful. But one of the most gratifying things to see is how faculty are talking to one another about their research, to see them really jazzed about how they're using AI in the classroom, and sharing speakers that they're going to bring in, and thinking about new case studies to write together. It's really fun to see the buzz amongst the faculty as they navigate this. Many, if not most, of our faculty are using AI in their research.
I think because they're becoming so comfortable with AI, they're genuinely excited about teaching AI now, either teaching content about AI or bringing AI into the pedagogy. I'll give you an example. In one particular class, the faculty member essentially created a GPT to search all of the management journals and to help answer common managerial questions and dilemmas. So it's an evidence-based management tool that the students can use. They could say, "What's the optimal way to set up a high-functioning team?" And it will search through the journals and give an evidence-based answer.

One of Stanford GSB's most popular courses is Interpersonal Dynamics, known as the "Touchy Feely" class. Do you think teaching skills like emotional intelligence as an aspect of leadership becomes even more important in an AI-dominated world?

Absolutely. Touchy Feely is an iconic class. Even though it's an elective, nearly every student takes it; it transforms people's lives, and they love this course. It focuses on an important facet of leadership: self-awareness. But that's only one piece. We also have courses that get students to think about a second facet of leadership, which is perspective-taking: the ability to ask very good questions, and to listen really well to others to understand where they're coming from. So, self-awareness and perspective-taking are part of the leadership model. The third thing: We have a wonderful set of classes on communications, not just about executive presence and executive communications, but classes that focus on nonverbal communication and written communication. The last two facets of our leadership model are critical and analytical decision making (having the judgment and wisdom to make the kinds of decisions that leaders always have to make) and contextual awareness, to think about the system in which they're embedded.
Not just to understand it, but to navigate it, and to have the will to try to change it if it needs to be changed. All of those dimensions of leadership are going to be more and more important in the coming years with AI. So many of the rote tasks and analyses will be done pretty well, maybe better than humans do them, by AI. But we are going to need people who can lead others, and lead them well, in a principled and purposeful fashion.



2025-11-20 01:29:00| Fast Company

As companies adopt AI, the conversation is shifting from the promise of productivity to concerns about AI's impact on wellbeing. Business leaders can't ignore the warning signs.

The mental health crisis isn't new, but AI is changing how we must address it. More than 1 billion people experience mental health conditions. Burnout is rising. And more people are turning to AI for support without the expertise of trained therapists. What starts as empathy on demand could accelerate loneliness. What's more, Stanford research found that these tools could introduce biases and failures that could result in dangerous consequences.

With the right leadership, AI can usher in a human renaissance: simplifying complex challenges, freeing up capacity, and sparking creativity. But optimism alone isn't a strategy. That's why responsible AI adoption is a business imperative, especially for companies building the technology. That work is not easy, but it's necessary.

UNCLEAR EXPECTATIONS

We've seen what happens when powerful platforms are built without the right guardrails: Algorithms can fuel outrage, deepen disconnection, and undermine trust. If we deploy AI without grounding it in values, ethics, and governance (designing the future without prioritizing wellbeing), we risk losing the trust and energy of the very people who would lead the renaissance.

I've seen this dynamic up close. In conversations with business and HR leaders, and through my work on the board of Project Healthy Minds, the signals are clear: People are struggling with unclear expectations around AI use, job insecurity, loneliness, uncertainty, and exhaustion. In a recent conversation with Phil Schermer, founder and CEO of Project Healthy Minds, he told me, "There's a reason why professional sports teams and hedge funds alike are investing in mental health programs for their teams that enable them to operate at the highest level."
Companies that invest in improving the mental health of their workforce see higher levels of productivity, innovation, and retention of high performers.

5 WAYS TO BUILD AN AI-FIRST WORKPLACE THAT PROTECTS WELLBEING

Wellbeing should be at the core of the AI enablement strategy. Here are five ways to incorporate it.

1. Set clear expectations

Employees need to understand how to work with AI and that their leaders have their back. That means prioritizing governance and encouraging experimentation within safe, ethical guardrails. Good governance builds trust, and trust is the foundation of any successful transformation. Investing in learning and growth sends a powerful message to employees: You belong in the future we're building if you're willing to adapt. We prioritize skill building through ServiceNow University so every employee feels confident working with AI day-to-day.

In a conversation with Open Machine CEO and AI advisor Allie K. Miller, she told me that we need to redefine success in jobs by an employee's output, value, and quality as they work with AI agents. This means looking at things like business impact and creativity, not just processes or tasks completed.

2. Model healthy AI behavior

AI implementation is a cultural shift. If we want employees to trust the technology, they need to see leaders and managers do the same. That modeling starts with curiosity. Employees don't need to be AI experts from day one, but they need to show a willingness to learn. Set norms around when, why, and how often teams engage with AI tools. Ask questions, share experiments, and celebrate use cases where AI saved time or sparked creativity. AI shouldn't be opt-in for teams; it should be part of how we work, learn, and grow. When leaders use AI thoughtfully, employees are more likely to follow suit.

3. Pulse-check employee sentiment consistently

To design meaningful wellbeing programs, leaders must ground analysis in data, continuously improve, and build for scale.
That starts by surveying employees to track sentiment, trust, and AI-related fatigue in real time. Then comes the harder part: acting on the data to show employees they're seen and supported. Leaders should ask: Are we tailoring wellbeing strategies to the unique needs of teams, regions, and roles? Are we embedding empathy into our platforms, workflows, and automated tasks? Are our AI tools safe, unbiased, and aligned to our values? Are we making mental health a routine part of manager check-ins?

According to Schermer, "The organizations making the biggest strides are the ones treating wellbeing data like commercial data: measured frequently, acted on quickly, and tied directly to outcomes."

4. Focus on connection, keeping people at the center

AI should not replace professional mental healthcare or real-world connections. We must resist the urge to scale empathy through bots alone. The unique human ability to notice distress, empathize, and escalate is largely irreplaceable. That's why leaders should advocate for human-first escalation ladders and align their policies with the World Health Organization's guidance on AI for health. Some researchers are exploring "traffic light" systems to flag when AI tools for mental health might cross ethical or personal boundaries.

AI adoption is a human shift, so people leaders need to take responsibility for AI transformation. That's why my chief people officer role at ServiceNow evolved to include chief AI enablement officer. Today's leadership imperatives include reducing the stigma around mental health, building confidence in AI systems, creating space for open human connection, and encouraging dialogue about digital anxiety, loneliness, or job insecurity.

5. Champion cross-sector collaboration

We need collaboration across industries and leadership roles, from tech to healthcare and from HR professionals to policymakers, to create systems of care alongside AI. The most effective strategies come from collective action.
That's why leaders should partner with coalitions to scale access to care, expand AI literacy, and advocate for mental health in the workforce. These partnerships can help us shape a better future for our people.

THE BOTTOM LINE: AI MUST BE BUILT TO WORK FOR PEOPLE

The future of work should be defined by trust, transparency, and humanity. This is our moment to lead with empathy, design with purpose, and build AI that works for people, not just productivity.

Jacqui Canney is chief people and AI enablement officer at ServiceNow.


