Why the greatest risk of AI in higher education is the erosion of learning

2026-02-22 09:00:00 | Fast Company

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it? These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag at-risk students, optimize course scheduling, or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature, and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we've been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences. As these technologies become better at producing knowledge work (designing classes, writing papers, suggesting experiments, and summarizing difficult texts), they don't just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship upon which these institutions are built, and on which they depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life. AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising, and institutional risk assessment. These are considered nonautonomous systems because they automate tasks, but a person is in the loop and using these systems as tools.

These technologies can pose a risk to students' privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are risk scores generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards, and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools, and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners, and on-demand explainers. Faculty use them to generate rubrics, draft lectures, and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments, and generate code.

This is where the cheating conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell when you're interacting with a human and when you're interacting with an automated agent. That can be alienating and distracting for those who interact with them. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot. A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases will be alienating to everyone involved and will shift the focus of academic interactions from learning to the means, or the technology, of learning. University of Pittsburgh researchers have shown that these dynamics bring forth feelings of uncertainty, anxiety, and distrust for students. These are problematic outcomes.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages, or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility, not only for students but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that's not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft, and learning to spot one's own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a "researcher in a box," an agentic AI system that can perform studies on its own, is becoming increasingly realistic. Agentic tools are anticipated to free up time for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense, but more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation, and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the routine responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions, and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry that is pushing AI into universities, it may seem as if this type of work is inefficient and that students will be better off letting a machine handle it. But it is the very nature of that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually through doing the work of drafting, revising, failing, trying again, grappling with confusion, and reworking weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research, and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated? One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities, and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars, and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.

Nir Eisikovits is a professor of philosophy and the director of the Applied Ethics Center at UMass Boston. Jacob Burley is a junior research fellow at the Applied Ethics Center at UMass Boston. This article is republished from The Conversation under a Creative Commons license. Read the original article.



I completely missed what ChatGPT was doing to me, until an 11-minute phone call made it painfully obvious

2026-02-22 04:03:15 | Fast Company

I've been using ChatGPT and other AI tools recently for quite a few things. A few examples:

- Working on strategy and operations for my latest business venture, Life Story Magic.
- Planning how to get the most value out of the Epic ski pass I bought for the year, while balancing everything else.
- Putting together a stretching and DIY physical therapy plan to get my shoulders feeling better during gym workouts.

Along the way, I've done what I think a lot of AI power users eventually wind up doing: I've gone into the personalization settings and told the chatbot to be neutral, direct, and just-the-facts. I don't want a chatbot that tells me "That is a brilliant idea!" every time I explore a tweak to my business strategy. They're not all brilliant, I assure you. And I don't want a lecture about how, if I truly have shoulder issues, I should see a real physical therapist. I'm an adult. I'm not outsourcing my judgment to a robot.

Stop. I didn't ask you that

The result of all this is that I've developed an alpha relationship with AI. I tell it what to do. If it goes on too long, if it assumes I agree with its suggestions, or starts padding its answers with unnecessary niceties, I shut it down. Stop. I didn't ask you that. No. Wrong. Listen to what I'm saying before replying. All I need from you are the following three things. Nothing else.

As ChatGPT itself repeatedly reminds me, it has no feelings. Here, I even asked it to confirm while writing this article: "I don't have feelings, and I can't be offended. You can be blunt, curt, or even rude to a chatbot and nothing is harmed. The awkwardness you're describing is entirely on the human side of the interaction."

All good, right? Until I caught myself dealing with customer service.

$800 worth of Warby Parker

Recently, I was returning most of a large Warby Parker order, probably close to $600 out of the $800 that I'd spent on glasses, spread across multiple orders placed on different days last month. I always try to remember that customer service workers are real people, often working on the opposite schedule so they can be available during American waking hours, dealing with one unhappy customer after another all day long. I keep that image in mind, so I remember that whatever small problem I'm having probably isn't a big deal. I guess I'm trying to be a decent human. I also avoid the remote possibility of becoming the star of some viral customer-service-gone-wrong video.

11 minutes of learning

But this call dragged on: 11 minutes in all. Writing that now, it doesn't seem super long, but at the time it felt like an eternity for something that should have been simple. There was a noticeable delay on the line, and not the best connection, and the customer service rep interrupted me several times, assuming that he understood what I was asking and launching into long, off-topic explanations before I could finish. Reflexively, I started talking to him the same way I talk to ChatGPT: Stop. I didn't ask you that. No. Listen to what I'm saying before replying. All I need from you are the following three things.

Entire life stories

To be fair, I caught myself pretty quickly. Also, I probably overcompensated for the rest of the call. In real life, it's almost a cliché among people who know me that I talk with everyone and often walk away knowing their entire life story, simply because I find almost everyone interesting. My wife, sitting next to me as I read this part aloud to her: "Mmmm-hmmm."

But in that moment, I had slipped into the mode I use with machines: efficient, blunt, and completely unconcerned with the other side's experience.

Machines are not human; humans are

I've stripped empathy out of my interactions with AI on purpose. I think that makes sense. I want speed and clarity, not emotional intelligence. Also, I'm uneasy with the idea of blurring the lines between humans and machines. But without thinking, I carried that same way of communicating into a conversation with a real, live, fellow human being. When you train yourself to communicate efficiently with something artificial, something that never needs patience, kindness, or to be treated with dignity, it's easy to forget that most of the world still does. And frankly, so do you.

Bill Murphy Jr.

This article originally appeared on Fast Company's sister site, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.



A new employee missed work on day 4, no reason given

2026-02-22 03:59:50 | Fast Company

Inc.com columnist Alison Green answers questions about workplace and management issues: everything from how to deal with a micromanaging boss to how to talk to someone on your team about body odor. Here's a roundup of answers to three questions from readers.

1. A new employee missed the fourth day of work, saying "something came up"

I had a new employee start on a Tuesday. That Friday, I woke up to a text from my new hire from the night before, saying that she would not be in on Friday, that something had come up and she would see me on Monday. This is an in-person job in a corporate environment. I fully respect a person's right to take a sick day and I feel nobody is obligated to share personal details, but I also don't feel like "something came up" quite cuts it, especially on what would be your fourth day on the job. I'm looking for some guidance on where to set my expectations (regardless of this person working out or not). Am I out of line to feel "something came up" is inadequate when calling out as a brand-new hire?

Green responds: You're not wrong! "Something came up" is strangely cavalier. "I'm sick" or "I have a family emergency" (without giving details beyond that) would both be fine, but "something came up" sounds like it could be "my sister called and I feel like talking to her" or "someone invited me to play tetherball." It also sounds like she doesn't think calling out on her fourth day of work is a big deal, when that's normally something people would really try to avoid unless they truly couldn't. "Something came up" might be fine from a longer-time employee who had a track record of reliability (although it would still be kind of weird), but it's pretty alarming from someone in their first week.

2. Scheduling a Zoom call to reject a job candidate

My friend has been applying for jobs and made it to the final round for one position. She didn't hear back within the timeline they had mentioned in the last interview, so she assumed they had passed on her and moved on. But she got an email from them recently asking to schedule a Zoom the next day. Feels promising, right? Wrong. She hops on the Zoom (with video) and they immediately tell her, "You are great, but we went with another candidate and they accepted." End of meeting. Is it appropriate to schedule a Zoom call just to reject someone? I feel like that's really overkill and sort of the equivalent of asking someone to come into the office just to reject them. At most, I felt like this could have been a quick phone call instead of going through the rigamarole of being on video. I also felt like scheduling the Zoom gave her the impression they would be making a formal offer, so it was doubly painful to get rejected in this manner because she got her hopes up.

Green responds: Yes, this is not good! I'm sure they didn't intend it to be awful for her, but this takes all the problems with phone call rejections (the person gets their hopes up, and then has to respond graciously on the spot to what might be crushing disappointment) and adds a horrible video twist (the person probably took time beforehand to ensure they looked professional, maybe put on makeup, all to get a rejection that could have been delivered over email). When companies do this, they think they're being courteous and respectful. She invested the time, the thinking goes, and we owe her the courtesy of a real conversation. Some candidates really do prefer rejections that way, but so many people find it upsetting that it's really better to stick to email. You can send a very gracious, personalized email rejection. You can even add a note that you'd be happy to talk on the phone if the person would like feedback, if that's something you're willing to offer. But making someone get rejected face-to-face on video is not kind, no matter what the intentions.

3. How to tell my network about a job opening

My company is trying to overcome some issues we've had in the past with hiring gaps: too many people promoted from within into roles that needed more experience. I've been asked to reach out to people I've worked with previously whom I would recommend for this role. It's a public posting and I'm happy to do that, since so many people are un- or underemployed. But I'm hung up on the awkwardness of it. "Hi! We haven't talked in literally five years, but I wondered if you'd be interested in this job that's far below your skill set, since it's better than where you are now? Look at this posting, let me know if you or these other guys I'm not in touch with, but you are, might be interested?" Could you please suggest a better script for cold-calling a request to apply?

Green responds: The easiest way to do it is to just say, "I'm trying to circulate the job posting to people who might be interested themselves or might know people who would be." (This is also the best way to do it when you're hoping the recipient themselves will apply, but you want plausible deniability with their manager that you didn't try to recruit them away, if there otherwise would be potentially awkward relationship ramifications.) And as for not having talked in five years: It doesn't really matter! Professional relationships don't have the same rules as social relationships. In a professional context, it's perfectly fine to contact someone you haven't talked to in years because you need a reference, think they might be interested in a job, or so forth. It's not considered rude just because you haven't stayed in touch in the interim.

Want to submit a question of your own? Send it to alison@askamanager.org.

Alison Green

This article originally appeared on Fast Company's sister site, Inc.com. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters that represent the most dynamic force in the American economy.

