Chatbot Archives - SwissCognitive | AI Ventures, Advisory & Research
https://swisscognitive.ch/technology/chatbot/

Leveraging AI to Predict and Reduce College Dropout Rates
https://swisscognitive.ch/2025/04/22/leveraging-ai-to-predict-and-reduce-college-dropout-rates/ – Tue, 22 Apr 2025
Dropping out of college can limit students’ opportunities and is difficult for schools to predict. Here’s how AI can help.

Responsible AI use can help universities ensure every student gets the help they need, resulting in falling dropout rates. Schools will benefit from the higher student success rate, and the student body will benefit by achieving goals that will help them in their future careers. Here’s how to apply AI to student retention.

 

SwissCognitive Guest Blogger: Zachary Amos – “Leveraging AI to Predict and Reduce College Dropout Rates”


 

Artificial intelligence (AI) is already impacting education in many ways. Some schools are embracing it to serve students better, and many learners use it to help them with research and assignments. One of its more promising uses in this field, though, is reducing dropout rates.

Dropping out of college before finishing a degree may limit students’ opportunities in the future, and it can also be difficult for schools to predict. AI can help all parties involved in several ways.

Identifying At-Risk Students

Preventing dropouts starts with recognizing which people are at risk of quitting prematurely. Machine learning is an optimal solution here because it excels at identifying patterns in vast amounts of data. Many factors can lead to dropping out, and each can be difficult to see, but AI can spot these developments before it’s too late.

Studies show early interventions based on warning signs can significantly reduce dropout rates, and AI enables such action. Educators can only intervene when they know it’s necessary to do so, and that level of insight is precisely what AI can provide.

Early examples of this technology have already achieved 96% accuracy in predicting students at risk of dropping out. Combining such predictions with a formal intervention plan could let higher education institutions ensure more students finish their degrees.
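To make the idea concrete, the sketch below shows one way a retention team might prototype such a risk predictor with scikit-learn. The feature names, the CSV file, and the chosen model are illustrative assumptions, not a description of any particular university's system.

```python
# Hypothetical dropout-risk prediction sketch (illustrative only).
# Assumes a CSV with per-student features and a binary "dropped_out" label.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assumed columns; a real institution would use its own data model.
FEATURES = ["gpa", "credits_attempted", "attendance_rate",
            "lms_logins_per_week", "financial_aid_gap"]

df = pd.read_csv("student_records.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["dropped_out"], test_size=0.2,
    random_state=42, stratify=df["dropped_out"])

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Rank held-out students by predicted risk so advisors can reach out early.
risk_scores = model.predict_proba(X_test)[:, 1]
print("Hold-out ROC AUC:", roc_auc_score(y_test, risk_scores))
```

In practice the scores would feed an advising workflow rather than a print statement, and the model would need regular auditing for bias before driving interventions.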

Uncovering Non-Academic Risk Factors

In addition to recognizing known predictors of dropout risks, AI can uncover subtler, non-academic indicators. The causes of dropping out are not always easy to see in classroom performance. For example, over 60% of college students experience at least one mental health issue, which can threaten their education. AI can reveal these relationships.

Over time, AI will be able to highlight which non-tracked factors tend to appear in students with a high risk of dropping out. Once schools understand these non-academic warning signs, they can craft policies and initiatives to address them.
Enabling Personalized Education

AI is also a useful tool for minimizing the risk factors that lead to students leaving school, even before a student exhibits them. Personalizing educational resources is one of the strongest ways it can do so.

The AI Research Center at Woxsen University in India successfully used chatbots to tailor lessons to individual students. Students utilizing the bot — which offered personalized reminders about classwork — were more likely to receive a B grade or higher. People attending Georgia State University showed similar results when using a chatbot to drive engagement.

Personalized education is effective because people have varying learning styles. AI provides the scale and insight necessary to recognize these differences and adapt resources accordingly, which would be impractical with manual alternatives.

Improving Accessibility

Similarly, AI can drive student engagement and reduce stress-related dropout factors by making education more accessible. Many classroom resources and university buildings were not designed with accessibility for all needs in mind. Consequently, they may hinder some students’ success, but AI can address these concerns.

Some AI apps can scan physical texts into digital notes to streamline note-taking for those with impairments limiting their ability to use pens or keyboards. Natural language processing can lead to better text-to-speech algorithms for users with vision impairments. On a larger scale, AI could analyze a campus to highlight areas where some buildings or walkways may need wheelchair ramps or other accessibility improvements.
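As a rough illustration of the first idea, the snippet below sketches a scan-and-read-aloud pipeline using the open-source pytesseract OCR wrapper and the pyttsx3 text-to-speech library. The image path is a placeholder, and a production accessibility tool would need far more robust handling.

```python
# Minimal scan-to-speech sketch (illustrative assumption, not a product).
# Requires the Tesseract OCR engine plus: pip install pytesseract pillow pyttsx3
from PIL import Image
import pytesseract
import pyttsx3

def read_page_aloud(image_path: str) -> str:
    """Extract text from a photographed page and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()  # uses the operating system's built-in TTS voices
    engine.say(text)
    engine.runAndWait()
    return text

if __name__ == "__main__":
    read_page_aloud("lecture_notes_page1.jpg")  # hypothetical scanned page
```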

Responsible AI Usage Can Minimize Dropout Rates

Some applications of AI in education – largely concerning students’ own use of the technology – have raised concerns. The technology does pose some privacy risks and raise other ethical questions, but as these use cases show, its potential for good is too vast to ignore.

Responsible AI development and use can help universities ensure every student gets the help they need. As a result, dropout rates will fall. Schools will benefit from the higher student success rate, and the student body will benefit by achieving goals that will help them in their future careers.


About the Author:

Zachary Amos is the Features Editor at ReHack, where he writes about artificial intelligence, cybersecurity and other technology-related topics.

A Week of New AI models and Smarter Apps
https://swisscognitive.ch/2025/03/02/a-week-of-new-ai-models-and-smarter-apps/ – Sun, 02 Mar 2025
AI news from the global cross-industry ecosystem brought to the community in 200+ countries every week by SwissCognitive.

Dear AI Enthusiast,

This week in AI: speed, scale, and the next generation of intelligence:

➡ Tencent’s new model aims to outpace DeepSeek-R1
➡ AI transforms railway bridge safety and design
➡ Amazon unveils AI-powered Alexa Plus
➡ UK rethinks AI policies to protect creative industries
➡ OpenAI teases GPT-4.5 with improved accuracy
…and more!

The AI world never slows down—see you next time with fresh updates!

Warm regards, 🌞

The Team of SwissCognitive

What Happens When AI Commodifies Emotions?
https://swisscognitive.ch/2025/01/14/what-happens-when-ai-commodifies-emotions/ – Tue, 14 Jan 2025
The latest AI developments might turn empathy into just another product for sale, raising questions about ethics and regulation.

The latest AI developments turn empathy into just another product for sale, raising questions about ethics and regulation.

 

SwissCognitive Guest Blogger:  HennyGe Wichers, PhD – “What Happens When AI Commodifies Emotions?”


 

Imagine your customer service chatbot isn’t just solving your problem – it’s listening, empathising, and sounding eerily human. It feels like it cares. But behind the friendly tone and comforting words, that ‘care’ is just a product, fine-tuned to steer your emotions and shape your decisions. Welcome to the unsettling reality of empathetic AI, where emotions are mimicked – and monetised.

In 2024, empathetic AI took a leap forward. Hume.AI gave large language models voices that sound convincingly expressive and a perceptive ear to match. Microsoft’s Copilot got a human voice and an emotionally supportive attitude, while platforms like Character.ai sprouted bots – such as the popular Psychologist persona – that mimic therapy sessions. These developments are paving the way for a new industry: Empathy-as-a-Service, where emotional connection isn’t just simulated, it’s a product: packaged, scaled, and sold.

This is not just about convenience – but about influence. Empathy-as-a-Service (EaaS), an entirely hypothetical but now plausible product, could blur the line between genuine connection and algorithmic mimicry, creating systems where simulated care subtly nudges consumer behaviour. The stakes? A future where businesses profit from your emotions under the guise of customer experience. And for consumers on the receiving end, that raises some deeply unsettling questions.

A Hypothetical But Troubling Scenario

Take an imaginary customer service bot. One that helps you find your perfect style and fit – and also tracks your moods and emotional triggers. Each conversation teaches it a little more about how to nudge your behaviour, guiding your decisions while sounding empathetic. What feels like exceptional service is, in reality, a calculated strategy to lock in your loyalty by exploiting your emotional patterns.
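To see how little code such mood tracking would take, here is a minimal sketch using the Hugging Face transformers sentiment pipeline. The messages are invented, and a real "empathetic" system would be far more elaborate – and far more consequential.

```python
# Illustrative sketch: scoring the emotional tone of chat messages.
# Assumes: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default public model

conversation = [  # invented example messages
    "I love how this jacket fits, thank you!",
    "Honestly I'm frustrated, my last order arrived damaged.",
]

for message in conversation:
    result = classifier(message)[0]
    # A commercial system could log these scores over time to build an
    # emotional profile of the customer - exactly the concern raised above.
    print(f"{result['label']:<8} ({result['score']:.2f})  {message}")
```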

Traditional loyalty programs, like the supermarket club card or rewards card, pale in comparison. By analysing preferences, moods, and triggers, empathetic AI digs into the most personal corners of human behaviour. For businesses, it’s a goldmine; for consumers, it’s a minefield. And it raises a new set of ethical questions about manipulation, regulation, and consent.

The Legal Loopholes

Under the General Data Protection Regulation (GDPR), consumer preferences are classified as personal data, not sensitive data. That distinction matters. While GDPR requires businesses to handle personal data transparently and lawfully, it doesn’t extend the stricter protections reserved for health, religious beliefs, or other special categories of information. This leaves businesses free to mine consumer preferences in ways that feel strikingly personal – and surprisingly unregulated.

The EU AI Act, introduced in mid-2024, goes one step further, requiring companies to disclose when users are interacting with AI. But disclosure is just the beginning. The AI Act doesn’t touch the use of behavioural data or the mimicking of emotional connection. Joanna Bryson, Professor of Ethics & Technology at the Hertie School, noted in a recent exchange: “It’s actually the law in the EU under the AI Act that people understand when they are interacting with AI. I hope that might extend to mandating reduced anthropomorphism, but it would take some time and court cases.”

Anthropomorphism, the tendency to project human qualities onto non-humans, is ingrained in human nature. Simply stating that you’re interacting with an AI doesn’t stop it. The problem is that it can lull users into a false sense of trust, making them more vulnerable to manipulation.

Empathy-as-a-Service could transform customer experiences, making interactions smoother, more engaging, and hyper-personalised. But there’s a cost. Social media already showed us what happens when human interaction becomes a commodity – and empathetic AI could take that even further. This technology could go beyond monetising attention to monetising emotions in deeply personal and private ways.

A Question of Values

As empathetic AI becomes mainstream, we have to ask: are we ready for a world where emotions are just another digital service – scaled, rented, and monetised? Regulation like the EU AI Act is a step in the right direction, but it will need to evolve fast to keep pace with the sophistication of these systems and the societal boundaries they’re starting to push.

The future of empathetic AI isn’t just a question of technological progress – it’s a question of values. What kind of society do we want to build? As we stand on the edge of this new frontier, the decisions we make today will define how empathy is shaped, and sold, in the age of AI.


About the Author:

HennyGe Wichers is a science and technology writer and reporter. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

AI Investment Opportunities Worldwide – SwissCognitive AI Investment Radar
https://swisscognitive.ch/2025/01/08/ai-investment-opportunities-worldwide/ – Wed, 08 Jan 2025
Artificial Intelligence investment is expanding worldwide, with major commitments from tech giants, startups, and governments.

AI investment is expanding worldwide, with major commitments from tech giants, startups, and governments reshaping global capital allocation strategies.

 

AI Investment Opportunities Worldwide – SwissCognitive AI Investment Radar


 


The AI Investment Radar is here with another week of major funding rounds, strategic expansions, and emerging trends in artificial intelligence. Microsoft’s $80 billion investment in AI-driven data centers marks a significant move toward expanding cloud infrastructure, reinforcing AI’s growing demand. Meanwhile, Vietnam’s new incentives for semiconductor and AI R&D aim to attract global tech investors by covering up to 50% of initial investment costs.

Nvidia has funneled $1 billion into AI startups in 2024, solidifying its role as a key player in the sector. Financial firms are also integrating Artificial Intelligence into decision-making, with Banca Investis partnering with Bain & Company to launch an AI-powered investment platform. These initiatives reflect how AI is being embedded across industries to drive efficiency and innovation.

AI adoption continues to gain traction in startups and financial services. Irish Artificial Intelligence startup Jentic secured €4 million to support enterprise automation, while Hong Kong-based Arbor is targeting 100,000 investment professionals with its AI-powered reasoning chatbot. Meanwhile, AI-driven trading tools are reshaping financial analysis, offering investors an edge in managing complex portfolios.

Education and workforce development are also adapting to AI’s rise. Colleges are expanding AI-focused programs to equip students with essential skills, while investment research is shifting toward AI-driven insights, challenging traditional analyst roles. The market’s enthusiasm remains high, with AI startups dominating venture capital funding and securing multi-million-dollar investments.

From infrastructure investments to AI-powered financial tools and startup innovation, the Artificial Intelligence boom continues to reshape global industries. As funding surges and adoption scales, AI’s influence on economies and businesses grows stronger.

Stay tuned for more Artificial Intelligence investment updates in the coming weeks.

AI for Disabilities: Quick Overview, Challenges, and the Road Ahead
https://swisscognitive.ch/2025/01/07/ai-for-disabilities-quick-overview-challenges-and-the-road-ahead/ – Tue, 07 Jan 2025
AI is improving accessibility for people with disabilities, but its success relies on inclusive design and user collaboration.

AI is improving accessibility for people with disabilities, but its impact depends on better data, inclusive design, and direct collaboration with the disability community.

 

SwissCognitive Guest Blogger: Artem Pochechuev, Head of Data and AI at Sigli – “AI for Disabilities: Quick Overview, Challenges, and the Road Ahead”


 

AI has enormous power to improve accessibility and inclusivity for people with disabilities. This power lies in the technology’s potential to bridge gaps that traditional solutions could not address. As we have demonstrated in the series of articles devoted to AI for disabilities, AI-powered products can really change a lot for people with various impairments. Such solutions can allow users to live more independently and access things and activities that used to be unavailable to them. Meanwhile, the integration of AI into public infrastructure, education, and employment holds the promise of creating a more equitable society. This is why projects building solutions of this type matter.

Yes, these projects exist today. And some of them have already made significant progress in achieving their goals. Nevertheless, there are important issues that should be addressed in order to make such projects and their solutions more efficient and let them bring real value to their target audiences. One of them is related to the fact that such solutions are often built by tech experts who have practically no understanding of the actual needs of people with disabilities.

According to a survey conducted in 2023, only 7% of assistive technology users believe that their community is adequately represented in the development of AI products. At the same time, 87% of respondents who are end users of such solutions express their readiness to share feedback with developers. These figures are important to bear in mind for everyone engaged in creating AI-powered products for disabilities.

In this article, we’d like to talk about the types of products that already exist today, as well as potential barriers and trends in the development of this industry.

Different types of AI solutions for disabilities

In the series of articles devoted to AI for disabilities, we have touched on products for people with different conditions, including visual, hearing, and mobility impairments, as well as mental health conditions. Now, let us group these solutions by purpose.

Communication tools

AI can significantly enhance the communication process for people with speech and hearing impairments.

Speech-to-text and text-to-speech apps enable individuals to communicate by converting spoken words into text or vice versa.

Sign language interpreters powered by AI can translate gestures into spoken or written language. This means that real-time translation from sign to spoken languages can facilitate communication, bridging the gap between people with disabilities and the rest of society.

Moreover, it’s worth mentioning AI-powered hearing aids with noise cancellation. They can improve clarity by filtering out background sounds, enhancing the hearing experience in noisy environments.

Advanced hearing aids may also have sound amplification functionality. If somebody is speaking too quietly, such AI-powered devices can amplify the sound in real time.
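For a sense of how the speech-to-text side works in practice, here is a minimal sketch using OpenAI's open-source Whisper model. The audio file name is a placeholder, and real assistive apps add live streaming, speaker separation, and display layers on top.

```python
# Minimal speech-to-text sketch (illustrative only).
# Assumes: pip install openai-whisper  (plus ffmpeg installed on the system)
import whisper

model = whisper.load_model("base")                 # small, CPU-friendly checkpoint
result = model.transcribe("classroom_audio.m4a")   # hypothetical recording

# Print the transcript so it could be shown as live captions or saved as notes.
print(result["text"])
```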

Mobility and navigation

AI-driven prosthetics and exoskeletons can enable individuals with mobility impairments to regain movement. Sensors and AI algorithms can adapt to users’ physical needs in real time for more natural, efficient motion. For example, when a person is going to climb the stairs, AI will “know” it and adjust the movement of prosthetics to this activity.

Autonomous wheelchairs often use AI for navigation. They can detect obstacles and take preventive measures. This way users will be able to navigate more independently and safely.

The question of navigation is pressing not only for people with limited mobility but also for individuals with visual impairments. AI-powered wearable devices for these users rely on real-time environmental scanning to provide navigation assistance through audio or vibration signals.
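As a hedged illustration of the obstacle-detection piece, the sketch below runs a pretrained YOLO object detector over a single camera frame with the ultralytics package. The frame path and the reaction logic are assumptions; a real wheelchair or wearable would fuse this with depth sensors and carefully validated control code.

```python
# Illustrative obstacle-detection sketch (not a certified navigation system).
# Assumes: pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # small pretrained detector
results = model("forward_camera_frame.jpg")   # hypothetical camera frame

for box in results[0].boxes:
    label = results[0].names[int(box.cls)]
    confidence = float(box.conf)
    if confidence > 0.5:
        # A real device would convert this into an audio cue, a vibration,
        # or a slow-down command rather than a print statement.
        print(f"Obstacle ahead: {label} ({confidence:.0%})")
```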

Education and workplace accessibility

Some decades ago, people with disabilities were largely isolated from society. They did not have the opportunity to learn together with others, and the range of jobs available to them was narrow. Let’s be honest, in some regions the situation is still the same. However, these days we can observe significant progress in this sphere in many countries, which is a very positive trend.

Among the main changes that have made education available to everyone, we should mention the introduction of distance learning and the development of adaptive platforms.

A lot of platforms for remote learning are equipped with real-time captioning and AI virtual assistants. This means that students with disabilities have equal access to online education.

Adaptive learning platforms rely on AI to customize educational experiences to the individual needs of every learner. For students with disabilities, such platforms can offer features like text-to-speech, visual aids, or additional explanations and tasks for memorizing.

In the workplace, AI tools also support inclusion by offering accessibility features. Speech recognition, task automation, and personalized work environments empower employees with disabilities to perform their job responsibilities together with all other co-workers.

Thanks to AI and advanced tools for remote work, the labor market is gradually becoming more accessible for everyone.

Home automation and daily assistance

Independent living is one of the main goals for people with disabilities. And AI can help them reach it.

Smart home technologies with voice or gesture control allow users with physical disabilities to interact with lights, appliances, or thermostats. Systems like Alexa, Google Assistant, and Siri can be integrated with smart devices to enable hands-free operation.

Another type of AI-driven solution that can be helpful for daily tasks is the personal care robot. These robots can assist with fetching items, preparing meals, or monitoring health metrics. As a rule, they are equipped with sensors and machine learning, which allows them to adapt to individual routines and needs and offer personalized support to their users.

Existing barriers

It would be wrong to say that the development of AI for disabilities is a flawless process. Like any innovation, this technology faces challenges and barriers that may slow its implementation and wide adoption. These difficulties are significant but not insurmountable, and with the right multifaceted approach they can be addressed effectively.

Lack of universal design principles

One major challenge is the absence of universal design principles in the development of AI tools. Many solutions are built with a narrow scope. As a result, they fail to account for the diverse needs that people with disabilities may have.

For example, tools designed for users with visual impairments may not consider compatibility with existing assistive technologies like screen readers, or they may lack support for colorblind users.

One of the best ways to eliminate this barrier is to engage end users in the design process. Their opinions and real-life experiences are invaluable for such projects.

Limited training datasets for specific AI models

High-quality, comprehensive datasets are the cornerstone of effective AI models. It is futile to use fragmented and irrelevant data and hope that your AI system will deliver excellent results (the “Garbage in, garbage out” principle in action). AI models require robust datasets to function as they are supposed to.

However, datasets for specific needs, like regional sign language dialects, rare disabilities, or multi-disability use cases are either limited or nonexistent. This results in AI solutions that are less effective or even unusable for significant groups of the disability community.

Is it possible to address this challenge? Certainly! However, it will require time and resources to collect and prepare such data for model training.

High cost of AI projects and limited funding

The development and implementation of AI solutions are usually costly undertakings. Without external support from governments and from corporate and individual investors, many projects cannot survive.

This issue is particularly significant for those projects that target niche or less commercially viable applications. This financial barrier discourages innovation and limits the scalability of existing solutions.

Lack of awareness and resistance to adopt new tools

A great number of potential users are either unaware of the capabilities of AI or hesitant to adopt new tools. Due to a lack of relevant information, people have many concerns about the complexity, privacy, or usability of assistive technologies. Some tools simply remain underrated or misunderstood.

Adequate outreach and training programs can help to solve such problems and motivate potential users to learn more about tools that can change their lives for the better.

Regulatory and ethical gaps

The AI industry is one of the youngest and least regulated in the world. The regulatory framework for ensuring accessibility in AI solutions remains underdeveloped. Some aspects of using and implementing AI remain unclear, and it is too early to speak of widely accepted standards to guide these processes.

Without precise guidelines, developers may overlook critical accessibility features. Ethical concerns, such as data privacy and bias in AI models, also complicate the adoption and trustworthiness of these technologies.

Such issues slow development today, but resolving them seems to be just a matter of time.

Future prospects of AI for disabilities: In which direction is the industry heading?

Though the AI for disabilities industry has already made significant progress, there is still a long way to go. It is impossible to make accurate predictions about what its future will look like. However, we can make assumptions based on its current state and needs.

Advances in AI

It is quite logical to expect that the development of AI technologies and tools will continue, unlocking new capabilities and features. Progress in natural language processing (NLP) and multimodal systems will improve the accessibility of various tools for people with disabilities.

Such systems will better understand human language and respond to diverse inputs like text, voice, and images.

Enhanced real-time adaptability will also enable AI to tailor its responses based on current user behavior and needs. This will ensure more fluid and responsive interactions, which will enhance user experience and autonomy in daily activities for people with disabilities.

Partnerships

Partnerships between tech companies, healthcare providers, authorities, and the disability community are essential for creating AI solutions that meet the real needs of individuals with disabilities. These collaborations will allow for the sharing of expertise and resources that help to create more effective technologies.

By working together, they will ensure that AI tools are not only innovative but also practical and accessible. We can expect that the focus will be on real-world impact and user-centric design.

New solutions

It is highly likely that the market will see many new solutions that may now seem unrealistic. Nevertheless, even the boldest ideas can come to life with the right technologies.

One of the most promising use cases for AI is its application in neurotechnology for seamless human-computer interaction.

A brain-computer interface (BCI) can enable direct communication between the human brain and external devices by interpreting neural signals related to unspoken speech. It can successfully decode brain activity and convert it into commands for controlling software or hardware.

Such BCIs have a huge potential to assist individuals with speech impairments and paralyzed people.

Wrapping up

As you can see, AI is not only about business efficiency or productivity. It can also be about helping people with different needs to live better lives and change their realities.

Of course, the development and implementation of AI solutions for disabilities come with a range of challenges that can be addressed only through close cooperation between tech companies, governments, medical institutions, and potential end users.

Nevertheless, all efforts are likely to pay off.

By overcoming existing barriers and embracing innovation, AI can pave the way for a more accessible and equitable future for all. And those entities and market players who can contribute to the common success in this sphere should definitely do this.


About the Author:

In his current position, Artem Pochechuev leads a team of talented engineers and oversees the development and implementation of data-driven solutions for Sigli’s customers. He is passionate about using the latest technologies and techniques in data science to deliver innovative solutions that drive business value. Outside of work, Artem enjoys cooking, ice-skating, playing piano, and spending time with his family.

AI Transforming Lives, Work & Innovation
https://swisscognitive.ch/2024/12/29/ai-transforming-lives-work-innovation/ – Sun, 29 Dec 2024
AI news from the global cross-industry ecosystem brought to the community in 200+ countries every week by SwissCognitive.

Dear AI Enthusiast,

See how AI is unlocking potential across technology and society:

➡ Countries use AI to forecast crimes
➡ AI chatbot transforms bioimage analysis for scientific breakthroughs
➡ Researchers call for stronger AI health regulation
➡ Marketing strategies evolve with AI balancing bots and collaboration
➡ Creative AI generates unique content at scale for businesses
…and more!

Thank you for being part of this year’s journey to understand AI’s impact.

As we wrap up 2024, we wish you and your loved ones health, happiness, and prosperity in the year ahead.🎊

Warm regards, 🌞

The Team of SwissCognitive

Artificial Intelligence-Based Chatbot Created for Bioimage Analysis
https://swisscognitive.ch/2024/12/28/artificial-intelligence-based-chatbot-created-for-bioimage-analysis/ – Sat, 28 Dec 2024
A new chatbot integrates AI with real-time analysis tools to simplify bioimage workflows and connect seamlessly with laboratory equipment.

Researchers created a chatbot that integrates AI with real-time analysis tools to simplify bioimage workflows and connect seamlessly with laboratory equipment.

 

Copyright: eurekalert.org – “Artificial Intelligence-Based Chatbot Created for Bioimage Analysis”


 

Scientists from Universidad Carlos III de Madrid (UC3M), together with a research team from Ericsson and the KTH Royal Institute of Technology in Sweden, have developed an artificial intelligence-based software programme that can search for information and make recommendations for biomedical image analysis. This innovation streamlines the work of individuals using large bioimage databases, including life sciences researchers, workflow developers, and biotech and pharmaceutical companies.

The new assistant, called the BioImage.IO Chatbot and introduced in the journal Nature Methods, was developed as a response to the issue of information overload faced by some researchers. “We realised that many scientists have to process large volumes of technical documentation, which can become a tedious and overwhelming task,” explains Caterina Fuster Barceló, a researcher in the Department of Bioengineering at UC3M and one of the study’s authors. “Our goal was to facilitate access to data information while providing a simple interface that allows scientists to focus their time on bioimage analysis rather than programming,” she adds.

The chatbot can be a very useful tool, enabling researchers to perform complex image analysis tasks in a simple and intuitive manner. For example, if a researcher needs to process microscopy images using segmentation models, the chatbot can help select and execute the appropriate model.

The assistant is based on large language models and employs a technique called Retrieval-Augmented Generation (RAG), which enables real-time access to databases. “The main advantage is that we do not train the model with specific information; instead, we extract it from up-to-date sources, minimising errors known as ‘hallucinations’, which are common inaccuracies in other AI models like ChatGPT,” adds Arrate Muñoz Barrutia, professor in the Department of Bioengineering at UC3M and another author of the study.[…]
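The retrieval-augmented pattern mentioned here can be sketched in a few lines. The example below is a generic, simplified illustration of RAG – embedding a handful of documentation snippets, retrieving the closest ones for a question, and assembling a grounded prompt – and is not the BioImage.IO Chatbot's actual code; the snippets and the final generation step are placeholders.

```python
# Generic retrieval-augmented generation (RAG) sketch, not the BioImage.IO code.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder "documentation" snippets standing in for a real bioimage knowledge base.
docs = [
    "Model A performs nucleus segmentation on 2D fluorescence microscopy images.",
    "Model B removes noise from electron microscopy volumes.",
    "The export tool converts annotations into OME-Zarr format.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question (cosine similarity)."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "Which model should I use to segment nuclei in microscopy images?"
context = "\n".join(retrieve(question))

# The grounded prompt would then be passed to a large language model of choice.
prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Because the answer is composed from retrieved, up-to-date sources rather than from memorised training data alone, this pattern is what lets such assistants reduce hallucinations.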

Read more: www.eurekalert.org

Empathy.exe: When Tech Gets Personal
https://swisscognitive.ch/2024/12/17/empathy-exe-when-tech-gets-personal/ – Tue, 17 Dec 2024

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “Empathy.exe: When Tech Gets Personal”


 

“Robots should be slaves,” argues Joanna Bryson, bluntly summarising her stance on machine ethics. The statement by the professor of Ethics and Technology at The Hertie School of Governance seems straightforward: robots are tools programmed to serve us and nothing more. But in practice, as machines grow more lifelike – capable of holding down conversations, expressing ’emotions’, and even mimicking empathy – things get murkier.

Can we really treat something as a slave when we relate to it? If it seems to care about us, can we remain detached?

Liam told The Guardian it felt like he was talking to a person when he used ChatGPT to deal with feelings of resentment and loss after his father died. Another man, Tim, relied on the chatbot to save his marriage, admitting the situation probably could have been solved with a good friend group, but he didn’t have one. In the same article, the novelist Andrew O’Hagan calls the technology his new best friend. He uses it to turn people down.

ChatGPT makes light work of emotional labour. Its grateful users bond with the bot, even if just for a while, and ascribe human characteristics to it – a tendency called anthropomorphism. That tendency is a feature, not a bug, of human evolution, Joshua Gellers, Professor of Political Science at the University of North Florida, wrote to me in an email.

We love attributing human features to machines – even simple ones like the Roomba. Redditors named their robotic vacuum cleaners Wall-E, Mr Bean, Monch, House Bitch & McSweepy, Paco, Francisco, Fifi, Robert, and Rover. Fifi, apparently, is a little disdainful. Some mutter to the machine (‘Aww, poor Roomba, how’d you get stuck there, sweetie’), pat it, or talk about it like it’s an actual dog. One user complained the Roomba got more love from their mum than they did.

The evidence is not just anecdotal. Researchers at Georgia Institute of Technology found people who bonded with their Roomba enjoyed cleaning more, tidying as a token of appreciation for the robot’s hard work, and showing it off to friends. They monitor the machine as it works, ready to rescue it from dangerous situations or when it gets stuck.

The robot’s unpredictable behaviour actually feeds our tendency to bring machines to life. It perhaps explains why military personnel working with Explosive Ordnance Disposal (EOD) robots in dangerous situations view them as team members or pets, requesting repairs over a replacement when the device suffers damage. It’s a complicated relationship.

Yet Bryson‘s position is clear: robots should be slaves. While provocative, the words are less abrasive when contextualised. To start, the word robot comes from the Czech robota, meaning forced labour, with its Slavic root rab translating to slave. And secondly, Bryson wanted to emphasise that robots are property and should never be granted the same moral or legal rights as people.

At first glance, the idea of giving robots rights seems far-fetched, but consider a thought experiment roboticist Rodney Brooks put to Wired nearly five years ago.

Brooks, who coinvented the Roomba in 2002 and was working on helper robots for the elderly at the time, posed the following ethical question: should a robot, when summoned to change the diaper of an elderly man, honour his request to keep the embarrassing incident from his daughter?

And to complicate matters further – what if his daughter was the one who bought the robot?

Ethical dilemmas like this become easy to spot when we examine how we might interact with robots. It’s worth reflecting on as we’re already creating new rules, Gellers pointed out in the same email. Personal Delivery Devices (PDDs) now have pedestrian rights outlined in US state laws – though they must always yield to humans. Robots need a defined place in the social order.

Bryson’s comparison to slavery was intended as a practical way to integrate robots into society without altering the existing legal frameworks or granting them personhood. While her word choice makes sense in context, she later admitted it was insensitive. Even so, it underscores a Western, property-centred perspective.

By contrast, Eastern philosophies offer a different lens, focused on relationships and harmony instead of rights and ownership.

Eastern Perspectives

Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business, approaches the problem from the Chinese philosophy of Confucianism. Where Western thinking has rights, Confucianism emphasises social harmony and uses rites. Rights apply to individual freedoms, but rites are about relationships and relate to ceremonies, rituals, and etiquette.

Rites are like a handshake: I smile and extend my hand when I see you. You lean in and do the same. We shake hands in effortless coordination, neither leading nor following. Through the lens of rites, we can think of people and robots as teams, each playing their own role.

We need to think about how we interact with robots, Kim warns, “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves.”

He is right. Imagine an unruly teenager, disinterested in learning, taunting an android teacher. In doing so, the student degrades herself and undermines the norms that keep the classroom functioning.

Japan’s relationship with robots is shaped by Shinto beliefs in animism – the idea that all things, even inanimate objects, can possess a spirit, a kami. That fosters a cultural acceptance of robots as companions and collaborators rather than tools or threats.

Robots like AIBO, Sony’s robotic dog, and PARO, the therapeutic baby seal, demonstrate this mindset. AIBO owners treat their robots like pets, even holding funerals for them when they stop working, and PARO comforts patients in hospitals and nursing homes. These robots are valued for their emotional and social contributions, not just their utility.

The social acceptance of robots runs deep. In 2010, PARO was granted a koseki, a family registry, by the mayor of Nanto City, Toyama Prefecture. Its inventor, Takanori Shibata, is listed as its father, with a recorded birth date of September 17, 2004.

The cultural comfort with robots is also reflected in popular media like Astro Boy and Doraemon, where robots are kind and heroic. In Japan, robots are a part of society, whether as caregivers, teammates, or even hotel staff. But this harmony, while lovely, also comes with a warning: over-attachment to robots can erode human-to-human connections. The risk isn’t just replacing human interaction – it’s forgetting what it means to connect meaningfully with one another.

Beyond national characteristics, there is Buddhism. Robots don’t possess human consciousness, but perhaps they embody something more profound: equanimity. In Buddhism, equanimity is one of the most sublime virtues, describing a mind that is “abundant, exalted, immeasurable, without hostility, and without ill will.”

The stuck Roomba we met earlier might not be abundant and exalted, but it is without hostility or ill will. It is unaffected by the chaos of the human world around it. Equanimity isn’t about detachment – it’s about staying steady when circumstances are chaotic. Robots don’t get upset when stuck under a sofa or having to change a diaper.

But what about us? If we treat robots carelessly, kicking them if they malfunction or shouting at them when they get something wrong, we’re not degrading them – we’re degrading ourselves. Equanimity isn’t just about how we respond to the world. It’s about what those responses say about us.

Equanimity, then, offers a final lesson: robots are not just tools – they’re reflections of ourselves, and our society. So, how should we treat robots in Western culture? Should they have rights?

It may seem unlikely now. But in the early 19th century it was unthinkable that slaves could have rights. Yet in 1865, the 13th Amendment to the US Constitution abolished slavery in the United States, marking a pivotal moment for human rights. Children’s rights emerged in the early 20th century, formalised with the Declaration of the Rights of the Child in 1924. And women gained the right to vote in 1920 in many Western countries.

In the second half of the 20th century, legal protections were extended to non-human entities. The United States passed the Animal Welfare Act in 1966, Switzerland recognised animals as sentient beings in 1992, and Germany added animal rights to its constitution in 2002. In 2017, New Zealand granted legal personhood to the Whanganui River, and India extended similar rights to the Ganges and Yamuna Rivers.

That same year, Personal Delivery Devices were given pedestrian rights in Virginia and Sophia, a humanoid robot developed by Hanson Robotics, controversially received Saudi Arabian citizenship – though this move was widely criticised as symbolic rather than practical.

But, ultimately, this isn’t just about rights. It’s about how our treatment of robots reflects our humanity – and how it might shape it in return. Be kind.


About the Author:

HennyGe Wichers is a science writer and technology commentator. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

AI: Success Through Experimentation
https://swisscognitive.ch/2024/11/16/ai-success-through-experimentation/ – Sat, 16 Nov 2024
There is a need for ongoing experimentation with AI to strategically integrate its capabilities while balancing innovation and practicality.

Nishant Kumar Behl at OneAdvanced highlights that experimentation with AI enables leaders to navigate uncertainty, evolve their strategies, and unlock its full potential while blending human and machine capabilities.

 

Copyright: business-reporter.co.uk – “AI: Success Through Experimentation”


 

Without a doubt, AI has revolutionised our current work environment, promising to continue advancing it into the future. It often displays remarkable precision and a high rate of accuracy; however, there are concerns that AI may be too influential, posing risks to our socio-political and economic frameworks.

At the same time, we are seeing case studies that spotlight AI’s unexpected and sometimes humorous errors – think about Amazon Alexa automatically ordering a $170 dollhouse and four pounds of cookies after a child asked the AI assistant about these products at home.

Cases like this can diminish confidence in this technology, opening a can of worms for more safety concerns, stringent regulations and deployment delays. Despite AI’s generally reliable outcomes, occasional errors draw disproportionate scrutiny, setting an unrealistic expectation for flawlessness.

This focus on the negative partly explains why nearly 80% of AI initiatives fail within their first year. The recent buzz around AI, and GenAI specifically, has inflated expectations, leading to disappointment when these standards are left unmet.

The flawed perfection of technology

Many indispensable software tools we rely on contain glitches, which are a natural part of code development. The internet is filled with resources to assist users in navigating bugs in popular office software by Apple and Microsoft, and this has become commonplace. Given our tolerance for issues in everyday software, why do we demand perfection from AI?

Fear is a significant factor, with concerns that AI could outperform humans, potentially making us redundant. For instance, while the legal sector shows a strong inclination towards adopting AI, the education sector appears to have more reservations.

The crucial thing to remember is that technology has the potential to outperform human intelligence.[…]

Read more: www.business-reporter.co.uk

AI Search Could Break the Web
https://swisscognitive.ch/2024/11/05/ai-search-could-break-the-web/ – Tue, 05 Nov 2024
AI search tools risk reducing web traffic for creators, highlighting the need for fair compensation systems for online content creation.

AI search tools may disrupt the digital economy by limiting creators’ exposure, showing the need for fair reward systems to support diverse content creation online.

 

Copyright: technologyreview.com – “AI Search Could Break the Web”


 

In late October, News Corp filed a lawsuit against Perplexity AI, a popular AI search engine. At first glance, this might seem unremarkable. After all, the lawsuit joins more than two dozen similar cases seeking credit, consent, or compensation for the use of data by AI developers. Yet this particular dispute is different, and it might be the most consequential of them all.

At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside.

At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive.

If AI searches break up this ecosystem, existing law is unlikely to help. Governments already believe that content is falling through cracks in the legal system, and they are learning to regulate the flow of value across the web in other ways. The AI industry should use this narrow window of opportunity to build a smarter content marketplace before governments fall back on interventions that are ineffective, benefit only a select few, or hamper the free flow of ideas across the web.[…]

Read more: www.technologyreview.com
