Media Archives - SwissCognitive | AI Ventures, Advisory & Research
https://swisscognitive.ch/industry/media/
Wed, 05 Mar 2025 11:58:53 +0000

$100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar
https://swisscognitive.ch/2025/03/06/100b_for_ai_chips_40b_for_ai_bets-swisscognitive-ai-investment-radar/
Thu, 06 Mar 2025 04:44:00 +0000

AI bets are reshaping industries, with billions going into AI chips and AI investments across finance, media, and cloud technology.

The post $100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Massive AI bets are reshaping industries, with $100 billion going into AI chips and $40 billion fueling AI investments across finance, media, and cloud technology.

 

$100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar


 


AI investment shows no signs of slowing, with capital flowing across semiconductors, cloud AI, financial AI, and responsible AI initiatives. This week, TSMC is preparing a staggering $100 billion investment in U.S. chip production, reinforcing the U.S. AI supply chain. Meanwhile, Anthropic’s valuation tripled to $61.5 billion after it secured $3.5 billion in funding to keep pace with OpenAI and DeepSeek.

The private sector’s AI appetite remains insatiable. Blackstone’s Jonathan Gray emphasized AI’s dominance in global investment trends, while Guggenheim and billionaire investors assembled a $40 billion AI investment pool to fuel finance, sports, and media innovation. Meanwhile, Canva’s AI report revealed that 94% of marketers have now integrated AI into their operations, marking a fundamental shift in business strategy.

The global AI race is also drawing government interest. The European Commission announced a €200 billion mobilization for AI investments, alongside France’s €109 billion push, as President Macron aims to position Europe as a heavyweight in AI development. Across the globe, China’s Honor pledged $10 billion to AI investment, deepening ties with Google for a global expansion.

The infrastructure for AI applications continues to scale rapidly. DoiT announced a $250 million fund dedicated to AI-driven cloud operations, while Shinhan Securities backed Lambda Labs with a $9.3 million investment to advance NVIDIA GPU-powered AI cloud services. Meanwhile, Accenture is doubling down on AI decision intelligence, backing Aaru to improve AI-powered behavioral simulations.

Beyond the corporate sphere, responsible AI investments are gaining traction. Chinese firms are increasing spending on ethical AI as part of a broader strategy to align AI governance with innovation. Meanwhile, Blackstone committed $300 million to AI-driven Insurtech, supporting AI-powered safety solutions in insurance.

With tech giants, startups, and governments all placing massive bets on AI, the sector’s financial landscape is evolving faster than ever. Investors are watching closely as AI’s long-term ROI takes center stage.

How will the capital influx shape AI’s next phase? The coming months will bring more answers.

Previous SwissCognitive AI Radar: AI Expansion and This Week’s Top Investments.

Our article does not offer financial advice and should not be considered a recommendation to engage in any securities or products. Investments carry the risk of decreasing in value, and investors may potentially lose a portion or all of their investment. Past performance should not be relied upon as an indicator of future results.

What Happens When AI Commodifies Emotions?
https://swisscognitive.ch/2025/01/14/what-happens-when-ai-commodifies-emotions/
Tue, 14 Jan 2025 04:44:00 +0000

The latest AI developments might turn empathy into just another product for sale, raising questions about ethics and regulation.

The post What Happens When AI Commodifies Emotions? appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The latest AI developments turn empathy into just another product for sale, raising questions about ethics and regulation.

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “What Happens When AI Commodifies Emotions?”


 

Imagine your customer service chatbot isn’t just solving your problem – it’s listening, empathising, and sounding eerily human. It feels like it cares. But behind the friendly tone and comforting words, that ‘care’ is just a product, fine-tuned to steer your emotions and shape your decisions. Welcome to the unsettling reality of empathetic AI, where emotions are mimicked – and monetised.

In 2024, empathetic AI took a leap forward. Hume.AI gave large language models voices that sound convincingly expressive and a perceptive ear to match. Microsoft’s Copilot got a human voice and an emotionally supportive attitude, while platforms like Character.ai and Psychologist sprouted bots that mimic therapy sessions. These developments are paving the way for a new industry: Empathy-as-a-Service, where emotional connection isn’t just simulated, it’s a product: packaged, scaled, and sold.

This is not just about convenience – but about influence. Empathy-as-a-Service (EaaS), an entirely hypothetical but now plausible product, could blur the line between genuine connection and algorithmic mimicry, creating systems where simulated care subtly nudges consumer behaviour. The stakes? A future where businesses profit from your emotions under the guise of customer experience. And for consumers on the receiving end, that raises some deeply unsettling questions.

A Hypothetical But Troubling Scenario

Take an imaginary customer service bot. One that helps you find your perfect style and fit – and also tracks your moods and emotional triggers. Each conversation teaches it a little more about how to nudge your behaviour, guiding your decisions while sounding empathetic. What feels like exceptional service is, in reality, a calculated strategy to lock in your loyalty by exploiting your emotional patterns.

Traditional loyalty programs, like the supermarket club card or rewards card, pale in comparison. By analysing preferences, moods, and triggers, empathetic AI digs into the most personal corners of human behaviour. For businesses, it’s a goldmine; for consumers, it’s a minefield. And it raises a new set of ethical questions about manipulation, regulation, and consent.

The Legal Loopholes

Under the General Data Protection Regulation (GDPR), consumer preferences are classified as personal data, not sensitive data. That distinction matters. While GDPR requires businesses to handle personal data transparently and lawfully, it doesn’t extend the stricter protections reserved for health, religious beliefs, or other special categories of information. This leaves businesses free to mine consumer preferences in ways that feel strikingly personal – and surprisingly unregulated.

The EU AI Act, introduced in mid-2024, goes one step further, requiring companies to disclose when users are interacting with AI. But disclosure is just the beginning. The AI Act doesn’t touch using behavioural data or mimicking emotional connection. Joanna Bryson, Professor of Ethics & Technology at the Hertie School, noted in a recent exchange: “It’s actually the law in the EU under the AI Act that people understand when they are interacting with AI. I hope that might extend to mandating reduced anthropomorphism, but it would take some time and court cases.”

Anthropomorphism, the tendency to project human qualities onto non-humans, is ingrained in human nature. Simply stating that you’re interacting with an AI doesn’t stop it. The problem is that it can lull users into a false sense of trust, making them more vulnerable to manipulation.

Empathy-as-a-Service could transform customer experiences, making interactions smoother, more engaging, and hyper-personalised. But there’s a cost. Social media already showed us what happens when human interaction becomes a commodity – and empathetic AI could take that even further. This technology could go beyond monetising attention to monetising emotions in deeply personal and private ways.

A Question of Values

As empathetic AI becomes mainstream, we have to ask: are we ready for a world where emotions are just another digital service – scaled, rented, and monetised? Regulation like the EU AI Act is a step in the right direction, but it will need to evolve fast to keep pace with the sophistication of these systems and the societal boundaries they’re starting to push.

The future of empathetic AI isn’t just a question of technological progress – it’s a question of values. What kind of society do we want to build? As we stand on the edge of this new frontier, the decisions we make today will define how empathy is shaped, and sold, in the age of AI.


About the Author:

HennyGe Wichers is a science and technology writer and reporter. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

How Countries Are Using AI to Predict Crime
https://swisscognitive.ch/2024/12/23/how-countries-are-using-ai-to-predict-crime/
Mon, 23 Dec 2024 10:53:39 +0000

To predict future crimes seems like something from a sci-fi novel — but already, countries are using AI to forecast misconduct.

The post How Countries Are Using AI to Predict Crime appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Countries aren’t only using AI to organize quick responses to crime — they’re also using it to predict crime. The United States and South Africa have AI crime prediction tools in development, while Japan, Argentina, and South Korea have already introduced this technology into their policing. Here’s what it looks like.

 

SwissCognitive Guest Blogger: Zachary Amos – “How Countries Are Using AI to Predict Crime”


 

A world where police departments can predict when, where and how crimes will occur seems like something from a science fiction novel. Thanks to artificial intelligence, it has become a reality. Already, countries are using this technology to forecast misconduct.

How Do AI-Powered Crime Prediction Systems Work?

Unlike regular prediction systems — which typically use hot spots to determine where and when future misconduct will be committed — AI can analyze information in real time. It may even be able to complete supplementary tasks like summarizing a 911 call, assigning a severity level to a crime in progress or using surveillance systems to tell where wanted criminals will be.

A machine learning model evolves as it processes new information. Initially, it might train to find hidden patterns in arrest records, police reports, criminal complaints or 911 calls. It may analyze the perpetrator’s demographic data or factor in the weather. The goal is to identify any common variable that humans are overlooking.

Whether the algorithm monitors surveillance camera footage or pores over arrest records, it compares historical and current data to make forecasts. For example, it may consider a person suspicious if they cover their face and wear baggy clothes on a warm night in a dark neighborhood because previous arrests match that profile.
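The training-then-scoring loop described above can be sketched in a few lines. The example below is purely illustrative: the features, thresholds, and synthetic "records" are invented for the sketch and bear no relation to any real policing system. It simply counts historical incident rates per coarse feature bucket, then scores a new observation against those rates.

```python
import random
from collections import defaultdict

random.seed(42)

def bucket(hour, priors):
    """Coarse feature bucket: time-of-day band and prior-incident load."""
    band = "night" if hour >= 22 or hour <= 3 else "day"
    load = "high" if priors > 5 else "low"
    return (band, load)

# "Train": tally incident rates per bucket from synthetic historical records.
# The hidden pattern the model should recover: late night plus many priors.
counts = defaultdict(lambda: [0, 0])  # bucket -> [incidents, observations]
for _ in range(5000):
    hour, priors = random.randint(0, 23), random.randint(0, 10)
    occurred = 1 if (hour >= 22 or hour <= 3) and priors > 5 else 0
    b = bucket(hour, priors)
    counts[b][0] += occurred
    counts[b][1] += 1

def risk_score(hour, priors):
    """Score a current observation by its bucket's historical incident rate."""
    incidents, total = counts[bucket(hour, priors)]
    return incidents / total if total else 0.0

print(risk_score(23, 8))  # late night, many priors -> high score
print(risk_score(14, 1))  # mid-afternoon, few priors -> low score
```

Real systems would use far richer features (geography, weather, call transcripts) and proper statistical models rather than raw bucket counts, but the comparison of historical rates against a current observation is the same basic move.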

Countries Are Developing AI Tools to Predict Crime

While these countries don’t currently have official AI prediction tools, various research groups and private police forces are developing solutions.

  • United States

Violent and property crimes are huge issues in the United States. For reference, a burglary occurs every 13 seconds — almost five times per minute — causing an average of $2,200 in losses. Various state and local governments are experimenting with AI to minimize events like these.

One such machine learning model developed by data scientists from the University of Chicago uses publicly available information to produce output. It can forecast crime with approximately 90% accuracy up to one week in advance.

While the data came from eight major U.S. cities, it centered around Chicago. Unlike similar tools, this AI model didn’t depict misdemeanors and felonies as hot spots on a flat map. Instead, it considered cities’ complex layouts and social environments, including bus lines, street lights and walkways. It found hidden patterns using these previously overlooked factors.

  • South Africa

Human trafficking is a massive problem in South Africa. For a time, one anti-human trafficking non-governmental organization was operating at one of the country’s busiest airports. After the group uncovered widespread corruption, its security clearance was revoked.

At this point, the group needed to lower its costs from $300 per intercept to $50 to align with funding and continue its efforts. Its members believed adopting AI would allow them to do that. With the right data, they could save more victims while keeping costs down.

Some Are Already Using AI Tools to Predict Crime

Governments have much more power, funding and data than nongovernmental organizations or research groups, so their solutions are more comprehensive.

  • Japan

Japan has an AI-powered app called Crime Nabi. The tool — created by the startup Singular Perturbations Inc. — is at least 50% more effective than conventional methods. Local governments will use it for preventive patrols.

Once a police officer enters their destination in the app, it provides an efficient route that takes them through high-crime areas nearby. The system can update if they get directed elsewhere by emergency dispatch. By increasing their presence in dangerous neighborhoods, police officers actively discourage wrongdoing. Each patrol’s data is saved to improve future predictions.
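The routing behaviour described above — steering a patrol through nearby high-risk areas on the way to a destination — can be illustrated with a toy greedy walk over a risk grid. This is a hypothetical sketch, not Crime Nabi's actual algorithm: the grid, the risk scores, and the greedy strategy are all invented for the example.

```python
# Grid cells mapped to forecast risk scores (made-up numbers).
risk = {
    (0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.2,
    (1, 1): 0.9, (2, 1): 0.3, (2, 2): 0.4,
}

def neighbours(cell):
    """Adjacent cells that exist on the risk grid."""
    x, y = cell
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in steps if (x + dx, y + dy) in risk]

def patrol_route(start, goal):
    """Greedy walk: at each step move to the unvisited neighbour with the
    highest risk score, until the goal is reached or no moves remain."""
    route, current, visited = [start], start, {start}
    while current != goal and len(route) < len(risk):
        unvisited = [c for c in neighbours(current) if c not in visited]
        if not unvisited:
            break
        current = max(unvisited, key=lambda c: risk[c])
        visited.add(current)
        route.append(current)
    return route

print(patrol_route((0, 0), (2, 2)))
```

On this grid the route detours through the 0.7 and 0.9 cells rather than taking a shortest path, mirroring the idea of increasing police presence in high-crime areas en route. A production system would also need to bound detour length and re-plan when dispatch redirects the officer, as the article notes.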

Despite using massive amounts of demographic, location, weather and arrest data — which would normally be expensive and incredibly time-consuming to process — Crime Nabi runs faster than conventional computing approaches and at a lower cost.

  • Argentina

Argentina’s Ministry of Security recently announced the Artificial Intelligence Applied to Security Unit, which will use a machine learning model to make forecasts. It will analyze historical data, scan social media, deploy facial recognition technology and process surveillance footage.

This AI-powered unit aims to catch wanted persons and identify suspicious activity. It will help streamline prevention and detection to accelerate investigation and prosecution. The Ministry of Security seeks to enable a faster and more precise police response.

  • South Korea

A Korean research team from the Electronics and Telecommunications Research Institute developed an AI they call Dejaview. It analyzes closed-circuit television (CCTV) footage in real time and assesses statistics to detect signs of potential offenses.

Dejaview was designed for surveillance — algorithms can process enormous amounts of data extremely quickly, so this is a common use case. Now, its main job is to measure risk factors to forecast illegal activity.

The researchers will work with Korean police forces and local governments to tailor Dejaview for specific use cases or affected areas. It will mainly be integrated into CCTV systems to detect suspicious activity.

Is Using AI to Stop Crime Before It Occurs a Good Idea?

So-called predictive policing has its challenges. Critics like the National Association for the Advancement of Colored People argue it could increase racial biases in law enforcement, disproportionately affecting Black communities.

That said, using AI to uncover hidden patterns in arrest and police response records could reveal bias. Policy-makers could use these insights to address the root cause of systemic prejudice, ensuring fairness in the future.

Either way, there are still significant, unaddressed concerns about privacy. Various activists and human rights organizations say having a government-funded AI scan social media and monitor security cameras infringes on freedom.

What happens if this technology falls into the wrong hands? Will a corrupt leader use it to go after their political rivals or journalists who write unfavorable articles about them? Could a hacker sell petabytes of confidential crime data on the dark web?

Will More Countries Adopt These Predictive Solutions?

More countries will likely soon develop AI-powered prediction tools. The cat is out of the bag, so to speak. Whether they create apps exclusively for police officers or integrate a machine learning model into surveillance systems, this technology is here to stay and will likely continue to evolve.


About the Author:

Zachary Amos is the Features Editor at ReHack, where he writes about artificial intelligence, cybersecurity and other technology-related topics.

Cost, Security, and Flexibility: the Business Case for Open Source Gen AI
https://swisscognitive.ch/2024/12/18/cost-security-and-flexibility-the-business-case-for-open-source-gen-ai/
Wed, 18 Dec 2024 04:44:00 +0000

Businesses are turning to open source Gen AI for flexibility, security, and cost control, balancing it with commercial models.

The post Cost, Security, and Flexibility: the Business Case for Open Source Gen AI appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Commercial generative AI platforms like OpenAI and Anthropic get all the attention, but open-source alternatives can offer cost benefits, security, and flexibility.

 

Copyright: cio.com – “Cost, security, and flexibility: the business case for open source gen AI”


 

Travel and expense management company Emburse saw multiple opportunities where it could benefit from gen AI. It could be used to improve the experience for individual users, for example, with smarter analysis of receipts, or help corporate clients by spotting instances of fraud.

Take for example the simple job of reading a receipt and accurately classifying the expenses. Since receipts can look very different, this can be tricky to do automatically. To solve the problem, the company turned to gen AI and decided to use both commercial and open source models. Both types of gen AI have their benefits, says Ken Ringdahl, the company’s CTO. The main commercial model, from OpenAI, was quicker and easier to deploy and more accurate right out of the box, but the open source alternatives offered security, flexibility, lower costs, and, with additional training, even better accuracy.

With security, many commercial providers use their customers’ data to train their models, says Ringdahl. It’s possible to opt out, but there are caveats. For instance, you might have to pay more to ensure your data isn’t used for training, and the data might still potentially be exposed to the public.

“That’s one of the catches of proprietary commercial models,” he says. “There’s a lot of fine print, and things aren’t always disclosed.”

Then there’s the geographical issue. Emburse is available in 120 different countries, and OpenAI isn’t. Plus, some regions have data residency and other restrictive requirements. “So we augment with open source,” he says. “It allows us to provide services in areas that aren’t covered, and check boxes on the security, privacy, and compliance side.”[…]

Read more: www.cio.com

Empathy.exe: When Tech Gets Personal
https://swisscognitive.ch/2024/12/17/empathy-exe-when-tech-gets-personal/
Tue, 17 Dec 2024 04:44:00 +0000

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

The post Empathy.exe: When Tech Gets Personal appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “Empathy.exe: When Tech Gets Personal”


 

“Robots should be slaves,” argues Joanna Bryson, bluntly summarising her stance on machine ethics. The statement by the professor of Ethics and Technology at The Hertie School of Governance seems straightforward: robots are tools programmed to serve us and nothing more. But in practice, as machines grow more lifelike – capable of holding down conversations, expressing ’emotions’, and even mimicking empathy – things get murkier.

Can we really treat something as a slave when we relate to it? If it seems to care about us, can we remain detached?

Liam told The Guardian it felt like he was talking to a person when he used ChatGPT to deal with feelings of resentment and loss after his father died. Another man, Tim, relied on the chatbot to save his marriage, admitting the situation probably could have been solved with a good friend group, but he didn’t have one. In the same article, the novelist Andrew O’Hagan calls the technology his new best friend. He uses it to turn people down.

ChatGPT makes light work of emotional labour. Its grateful users bond with the bot, even if just for a while, and ascribe human characteristics to it – a tendency called anthropomorphism. That tendency is a feature, not a bug, of human evolution, Joshua Gellers, Professor of Political Science at the University of North Florida, wrote to me in an email.

We love attributing human features to machines – even simple ones like the Roomba. Redditors named their robotic vacuum cleaners Wall-E, Mr Bean, Monch, House Bitch, McSweepy, Paco, Francisco, Fifi, Robert, and Rover. Fifi, apparently, is a little disdainful. Some mutter to the machine (‘Aww, poor Roomba, how’d you get stuck there, sweetie’), pat it, or talk about it like it’s an actual dog. One user complained the Roomba got more love from their mum than they did.

The evidence is not just anecdotal. Researchers at Georgia Institute of Technology found people who bonded with their Roomba enjoyed cleaning more, tidying as a token of appreciation for the robot’s hard work, and showing it off to friends. They monitor the machine as it works, ready to rescue it from dangerous situations or when it gets stuck.

The robot’s unpredictable behaviour actually feeds our tendency to bring machines to life. It perhaps explains why military personnel working with Explosive Ordnance Disposal (EOD) robots in dangerous situations view them as team members or pets, requesting repairs over a replacement when the device suffers damage. It’s a complicated relationship.

Yet Bryson’s position is clear: robots should be slaves. While provocative, the words are less abrasive when contextualised. To start, the word robot comes from the Czech robota, meaning forced labour, with its Slavic root rab translating to slave. And secondly, Bryson wanted to emphasise that robots are property and should never be granted the same moral or legal rights as people.

At first glance, the idea of giving robots rights seems far-fetched, but consider a thought experiment roboticist Rodney Brooks put to Wired nearly five years ago.

Brooks, who co-invented the Roomba in 2002 and was working on helper robots for the elderly at the time, posed the following ethical question: should a robot, when summoned to change the diaper of an elderly man, honour his request to keep the embarrassing incident from his daughter?

And to complicate matters further – what if his daughter was the one who bought the robot?

Ethical dilemmas like this become easy to spot when we examine how we might interact with robots. It’s worth reflecting on as we’re already creating new rules, Gellers pointed out in the same email. Personal Delivery Devices (PDDs) now have pedestrian rights outlined in US state laws – though they must always yield to humans. Robots need a defined place in the social order.

Bryson’s comparison to slavery was intended as a practical way to integrate robots into society without altering the existing legal frameworks or granting them personhood. While her word choice makes sense in context, she later admitted it was insensitive. Even so, it underscores a Western, property-centred perspective.

By contrast, Eastern philosophies offer a different lens, focused on relationships and harmony instead of rights and ownership.

Eastern Perspectives

Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business, approaches the problem from the Chinese philosophy of Confucianism. Where Western thinking has rights, Confucianism emphasises social harmony and uses rites. Rights apply to individual freedoms, but rites are about relationships and relate to ceremonies, rituals, and etiquette.

Rites are like a handshake: I smile and extend my hand when I see you. You lean in and do the same. We shake hands in effortless coordination, neither leading nor following. Through the lens of rites, we can think of people and robots as teams, each playing their own role.

We need to think about how we interact with robots, Kim warns, “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves.”

He is right. Imagine an unruly teenager, disinterested in learning, taunting an android teacher. In doing so, the student degrades herself and undermines the norms that keep the classroom functioning.

Japan’s relationship with robots is shaped by Shinto beliefs in animism – the idea that all things, even inanimate objects, can possess a spirit, a kami. That fosters a cultural acceptance of robots as companions and collaborators rather than tools or threats.

Robots like AIBO, Sony’s robotic dog, and PARO, the therapeutic baby seal, demonstrate this mindset. AIBO owners treat their robots like pets, even holding funerals for them when they stop working, and PARO comforts patients in hospitals and nursing homes. These robots are valued for their emotional and social contributions, not just their utility.

The social acceptance of robots runs deep. In 2010, PARO was granted a koseki, a family registry, by the mayor of Nanto City, Toyama Prefecture. Its inventor, Takanori Shibata, is listed as its father, with a recorded birth date of September 17, 2004.

The cultural comfort with robots is also reflected in popular media like Astro Boy and Doraemon, where robots are kind and heroic. In Japan, robots are a part of society, whether as caregivers, teammates, or even hotel staff. But this harmony, while lovely, also comes with a warning: over-attachment to robots can erode human-to-human connections. The risk isn’t just replacing human interaction – it’s forgetting what it means to connect meaningfully with one another.

Beyond national characteristics, there is Buddhism. Robots don’t possess human consciousness, but perhaps they embody something more profound: equanimity. In Buddhism, equanimity is one of the most sublime virtues, describing a mind that is “abundant, exalted, immeasurable, without hostility, and without ill will.”

The stuck Roomba we met earlier might not be abundant and exalted, but it is without hostility or ill will. It is unaffected by the chaos of the human world around it. Equanimity isn’t about detachment – it’s about staying steady when circumstances are chaotic. Robots don’t get upset when stuck under a sofa or having to change a diaper.

But what about us? If we treat robots carelessly, kicking them if they malfunction or shouting at them when they get something wrong, we’re not degrading them – we’re degrading ourselves. Equanimity isn’t just about how we respond to the world. It’s about what those responses say about us.

Equanimity, then, offers a final lesson: robots are not just tools – they’re reflections of ourselves, and our society. So, how should we treat robots in Western culture? Should they have rights?

It may seem unlikely now. But in the early 19th century it was unthinkable that slaves could have rights. Yet in 1865, the 13th Amendment to the US Constitution abolished slavery in the United States, marking a pivotal moment for human rights. Children’s rights emerged in the early 20th century, formalised with the Declaration of the Rights of the Child in 1924. And women gained the right to vote around 1920 in many Western countries.

In the second half of the 20th century, legal protections were extended to non-human entities. The United States passed the Animal Welfare Act in 1966, Switzerland recognised animals as sentient beings in 1992, and Germany added animal rights to its constitution in 2002. In 2017, New Zealand granted legal personhood to the Whanganui River, and India extended similar rights to the Ganges and Yamuna Rivers.

That same year, Personal Delivery Devices were given pedestrian rights in Virginia and Sophia, a humanoid robot developed by Hanson Robotics, controversially received Saudi Arabian citizenship – though this move was widely criticised as symbolic rather than practical.

But, ultimately, this isn’t just about rights. It’s about how our treatment of robots reflects our humanity – and how it might shape it in return. Be kind.


About the Author:

HennyGe Wichers is a science writer and technology commentator. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI)
https://swisscognitive.ch/2024/12/07/how-to-protect-workplace-relationships-in-an-era-of-artificial-intelligence-ai/
Sat, 07 Dec 2024 04:44:00 +0000

AI is transforming the workplace, but its true value lies in how thoughtfully it is used to foster trust and preserve authentic relationships.

The post How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI) appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Artificial intelligence (AI) is burrowing into many corners of our work lives. But what value does the technology offer when human cooperation is so vital to success? Quentin Millington of Marble Brook examines how AI helps or harms workplace relationships.

 

Copyright: hrzone.com – “How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI)”


 

Many of us, not least in HR, are grappling with how to use artificial intelligence (AI) across the workplace. The mainstream belief, or hope, is that AI will make work easier and more efficient, and so increase productivity. But it’s also important to consider its impact (positive or negative) on workplace relationships.

With AI, are we missing the point?

Blind faith in technology, pressure from social media and worries that the firm may be ‘left behind’ all direct attention away from a complex and yet crucial question: How will AI adoption affect workplace relationships?

As it stands, many organisations neglect relationships. Managers lacking interpersonal skills rely on a rule book. Inadequate or outdated systems reinforce silos. Colleagues are too busy or stressed to talk with each other. Pursuit of near-term outcomes encourages ‘transactional’ exchanges.

While mechanistic thinking about performance is the norm, its day-to-day practice hurts experiences, productivity and results. Modern work demands that people collaborate on complex problems: no brandishing of managers’ whips recovers potential lost to bureaucratic methods.

“Whether corporate motives behind the adoption of AI are good or doubtful, you have the freedom to protect your workplace relationships.”

AI and workplace relationships

If technology is to help rather than harm, it must amplify and not muffle the human relationships that make cooperation possible. To evaluate AI against this yardstick, let us examine several ways in which platforms are, or may be, used across the workplace.

1. Freedom from drudgery

AI, apologists say, will pick up the drudgery and liberate you for what matters most: tasks only humans can do. Relationships demand time and energy, so less effort spent on tedious activities is clearly a benefit.[…]

Read more: www.hrzone.com


]]>
AI and Criminal Justice: How AI Can Support – Not Undermine – Justice https://swisscognitive.ch/2024/11/29/ai-and-criminal-justice-how-ai-can-support-not-undermine-justice/ Fri, 29 Nov 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126795 AI adoption in criminal justice brings opportunities for efficiency and public safety but requires ethical safeguards.

The post AI and Criminal Justice: How AI Can Support – Not Undermine – Justice first appeared on SwissCognitive | AI Ventures, Advisory & Research.

]]>
AI adoption in criminal justice brings opportunities for efficiency and public safety but requires ethical safeguards to prevent risks of bias, misuse, and erosion of trust.

 

Copyright: theconversation.com – “AI and Criminal Justice: How AI Can Support – Not Undermine – Justice”


 

Interpol Secretary General Jürgen Stock recently warned that artificial intelligence (AI) is facilitating crime on an “industrial scale” using deepfakes, voice simulation and phony documents.

Police around the world are also turning to AI tools such as facial recognition, automated licence plate readers, gunshot detection systems, social media analysis and even police robots. AI use by lawyers is similarly “skyrocketing” as judges adopt new guidelines for using AI.

While AI promises to transform criminal justice by increasing operational efficiency and improving public safety, it also comes with risks related to privacy, accountability, fairness and human rights.

Concerns about AI bias and discrimination are well documented. Without safeguards, AI risks undermining the very principles of truth, fairness, and accountability that our justice system depends on.

In a recent report from the University of British Columbia’s School of Law, Artificial Intelligence & Criminal Justice: A Primer, we highlighted the myriad ways AI is already impacting people in the criminal justice system. Here are a few examples that reveal the significance of this evolving phenomenon.

The promises and perils of police using AI

In 2020, an investigation by The New York Times exposed the sweeping reach of Clearview AI, an American company that had built a facial recognition database using more than three billion images scraped from the internet, including social media, without users’ consent.

Policing agencies worldwide that used the program, including several in Canada, faced public backlash. Regulators in multiple countries found the company had violated privacy laws. It was asked to cease operations in Canada.

Clearview AI continues to operate, citing success stories of helping to exonerate a wrongfully convicted person by identifying a witness at a crime scene; identifying someone who exploited a child, which led to their rescue; and even detecting potential Russian soldiers seeking to infiltrate Ukrainian checkpoints.[…]

Read more: www.theconversation.com


]]>
Learning to Manage Uncertainty, With AI https://swisscognitive.ch/2024/11/15/learning-to-manage-uncertainty-with-ai/ Fri, 15 Nov 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126682 Combining AI with organizational learning equips companies to better navigate uncertainty in dynamic environments.

The post Learning to Manage Uncertainty, With AI first appeared on SwissCognitive | AI Ventures, Advisory & Research.

]]>
Organizations that combine AI with strong learning capabilities, termed “Augmented Learners,” are better equipped to manage uncertainties in tech, talent, and regulations, as illustrated by companies like Estée Lauder that use AI to adapt swiftly to changing consumer trends.

 

Copyright: sloanreview.mit.edu – “Learning to Manage Uncertainty, With AI”


 

The second Artificial Intelligence and Business Strategy report of 2024, from MIT Sloan Management Review and Boston Consulting Group, looks at how organizations that combine organizational learning and AI learning are better prepared to manage uncertainty. It examines how the emergence of generative AI is changing workers’ and organizations’ attitudes toward the technology, and the opportunities and risks that it poses.

Uncertainty Abounds

Uncertainty is all about the unknown. The less an organization knows, the greater its uncertainty and the less able it is to manage resources effectively. Managing uncertainty, therefore, requires learning. Companies need to learn more, and more quickly, to manage uncertainty.

Addressing uncertainty constitutes a pressing challenge for leadership, especially today, when geopolitical tensions, fast-moving consumer preferences, talent disruptions, shifting regulations, and rapidly evolving technologies complicate the business environment. Companies need better tools and perspectives for learning to manage uncertainty arising from these and other business disruptions. Our research finds that a major source of uncertainty, artificial intelligence, is also critical to meeting this challenge. Specifically:

Companies that boost their learning capabilities with AI are significantly better equipped to handle uncertainty from technological, regulatory, and talent-related disruptions compared with companies that have limited learning capabilities.

The Estée Lauder Companies (ELC) offers a case in point. The cosmetics company has a strategic need to anticipate consumer trends ahead of its competitors. In earlier times, consumer preferences might have shifted seasonally. Now, preferences are less certain; shifts happen more quickly due to social media and digital influencers. Fashion trends can change by the week. If the color peach suddenly captures the public’s interest, the company needs to discern that trend as quickly as possible.[…]

Read more: www.sloanreview.mit.edu


]]>
5 Tips to Seamlessly Integrate AI Into Your Business https://swisscognitive.ch/2024/11/07/5-tips-to-seamlessly-integrate-ai-into-your-business/ Thu, 07 Nov 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126610 Integrating AI effectively requires clear goals, incremental steps, and a focus on privacy and adaptability to harness its potential.

The post 5 Tips to Seamlessly Integrate AI Into Your Business first appeared on SwissCognitive | AI Ventures, Advisory & Research.

]]>
Five strategic tips to help businesses integrate AI thoughtfully during its early, high-risk boom phase: define clear goals, start small, build AI expertise, choose tools wisely, and prioritize privacy.

 

Copyright: inc.com – “5 Tips to Seamlessly Integrate AI Into Your Business”


 

Small businesses shouldn’t ignore AI during its boom phase, but they shouldn’t get carried away either.

Yes, AI is still in its early stages. Yes, there is still a lot we don’t know about AI. And yes, AI is here to stay.

Implemented the right way, AI can work in tandem with human expertise to automate repetitive, time-consuming tasks, break down complex data, tailor customer experiences, and generally help your business succeed.

While AI may seem like a challenge the business world has yet to face, it’s really just another iteration of technological change that has been occurring for decades.

The Tech Industry’s Boom and Bust Cycle

The dot-com boom of the 1990s and early 2000s saw the birth of internet service providers and search engines such as Infoseek, Lycos, WebCrawler, and Ask Jeeves.

As we now know, most of those didn’t make it.

The dot-com bust was the downfall of a lot of companies. Why? Poor planning, an inability to monetize, failure to adapt, and a lot of competition.

Then came social media. Friendster, Foursquare, and MySpace were all popular but eventually lost their momentum as other, more adaptable platforms, built on an idea and not on technical advancement, took their place.

AI is still in its early boom stage—meaning it’s probably not a good idea for businesses to tie their future to it.

5 Tips to Help You Embrace AI Effectively

Some believe AI represents a risk to workers, but that risk remains undefined and uncertain. What is certain is that AI already has the ability to be a great resource.[…]

Read more: www.inc.com


]]>
How AI Is Changing The Role Of Bank Employees – ZHAW https://swisscognitive.ch/2024/09/12/how-ai-is-changing-the-role-of-bank-employees-zhaw/ Thu, 12 Sep 2024 03:44:00 +0000 https://swisscognitive.ch/?p=126066 The rapid growth of AI in banking raises questions about future changes in the tasks and roles of employees.

The post How AI Is Changing The Role Of Bank Employees – ZHAW first appeared on SwissCognitive | AI Ventures, Advisory & Research.

]]>
The rapid development of artificial intelligence in the banking sector raises the question of how the tasks and roles of employees will change in the future. The upcoming “Finance Circle” will address this topic.

 

Credit: This article with Dalith Steiger-Gablinger has been published in German as “ZHAW-Veranstaltung: Wie KI die Rolle der Bank-Mitarbeitenden verändert” – “How AI Is Changing The Role Of Bank Employees – ZHAW”


 

The next Finance Circle, titled “Banking Skills in the Age of AI”, will take place on 16 September 2024, organized by the Zurich University of Applied Sciences (ZHAW) in collaboration with the Zurich Bankers Association (ZBV). finews.ch is a media partner of the ZBV.

Ahead of the event, artificial intelligence (AI) expert Dalith Steiger-Gablinger addresses the topic in a guest article, discussing the potential changes in banking and the skills bank employees will need to remain successful.

AI takes over data-intensive tasks – but not everything

Everything that is connected to data processing and preparation will be taken over by AI in the near future. AI can provide enormous support, especially in the area of portfolio management and customer advice.

The role of emotional intelligence

Artificial intelligence gives us more time to invest in interpersonal relationships, both with clients and within teams. In a world where technology is becoming increasingly dominant, skills such as empathy and emotional intelligence are more in demand than ever. Accordingly, socially critical and philosophical questions are becoming increasingly central.

Collaboration between humans and machines can only be successful if humans build the emotional bridge between the data analysis provided by AI and the needs of the customer. It’s not just about the data provided by AI, but also about how we can interpret this information in human terms and communicate it to customers.

Key skills in dealing with AI

It is a misconception that AI makes us think less. On the contrary: when dealing with AI, you have to think carefully about the goal you are pursuing and ask the AI the right questions. The result depends heavily on how precisely we formulate the task.

Dealing with ChatGPT is comparable to communication between a boss and a secretary: In the past, bosses had to communicate very clearly what they wanted to say in a letter. If the instructions were unclear, the letter was not what they had in mind. The situation is similar with ChatGPT: the more precise and well thought-out the input, the better the result.
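The boss-and-secretary point can be sketched in a few lines of Python. The `build_prompt` helper and its fields are illustrative assumptions, not anything from the article: it simply contrasts a vague one-liner with a brief that states the task, context, output format and audience.

```python
def build_prompt(task: str, context: str, output_format: str, audience: str) -> str:
    """Compose a precise brief for an AI assistant: goal, background, format, reader."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output format: {output_format}\n"
        f"Audience: {audience}"
    )

# A vague instruction leaves the assistant guessing, like an unclear dictation.
vague = "Write something about our new savings product."

# A precise brief pins down exactly what the "letter" should contain.
precise = build_prompt(
    task="Draft a 150-word client email announcing our new savings account",
    context="Rate is 1.2% p.a.; existing clients can switch online",
    output_format="Short email with a subject line and one call to action",
    audience="Retail banking clients aged 30 to 50",
)
```

The hypothetical field names matter less than the habit: stating goal, context, format and audience up front is what "precise and well thought-out input" looks like in practice.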

Technological understanding required

Although technical knowledge is not the main focus when dealing with AI, it is still important that bank employees understand the “power of the technology”. It’s similar to a smartphone. You don’t need to know how it works on the inside, but you should understand the possibilities it offers.

Employees don’t need to know the technical details of an AI application, but rather recognize its potential and be able to correctly assess when and how they can use it.

Further training and gut feeling as decisive factors

In the past, stenography and typewriting skills were basic requirements. Today and in the future, it will be essential to master the use of AI applications. Bank employees who find it difficult to use these technologies will find it harder to hold their own in the industry in the future.

Another key point is gut feeling. Even if AI delivers a result that seems logical, we still have to trust our gut feeling. If we sense that an AI result doesn’t suit the customer, even though the numbers are right, we need to listen to that intuition. Humans have the unique ability to evaluate situations in context and this ability remains essential.

Ultimately, it is not about using technology at all costs, but about where it supports us in a meaningful way and where it does not. Just because something is technically possible does not mean that we should do it. Humans must always remain in control and define the framework conditions for how AI can be used in different areas – from medicine to banking.

Conclusion: Humans remain crucial

The development of AI is progressing relentlessly, but humans remain indispensable in many areas. Emotional intelligence, critical thinking and the correct assessment of technologies are examples of the crucial skills needed to survive in the job market of the future.


 

Register for ZHAW’s free event today and meet Dalith Steiger-Gablinger and the fellow esteemed participants:

Dr. Michel Neuhaus, Head AI & Analytics, UBS Switzerland
Dr. David Schlumpf, Head Learning & Leadership Development, JB Academy, Julius Bär
Matthias Läubli, Chairman of the Executive Board, Raiffeisenbank Zürich
Mark Dittli, Managing Director and Editor, The Market

The event will be conducted in German.

Original article in German.


]]>