API Archives - SwissCognitive | AI Ventures, Advisory & Research
https://swisscognitive.ch/technology/api/

Generative AI As A Service: An Emerging Frontier In Small Business Banking
https://swisscognitive.ch/2024/06/24/generative-ai-as-a-service-an-emerging-frontier-in-small-business-banking/ – Mon, 24 Jun 2024


AI service offerings are transforming small business banking by providing tailored financial tools and AI-driven marketing solutions, positioning banks as essential tech partners.

 

Copyright: forbes.com – “Generative AI As A Service: An Emerging Frontier In Small Business Banking”


 

Small-business owners (SMBs) face many challenges juggling multiple roles—engaging with customers, hiring and retaining employees, closing sales and managing finances. Small-business bankers often ask how they can deepen their relationships with this coveted cohort in ways that differentiate their offerings and create trusted partnerships that go beyond their core financial solutions.

The pace of technology change adds to the challenge these business owners face—particularly with rapidly evolving technology such as generative AI. Unlike larger enterprises, small businesses typically lack the expertise and capital to invest in advanced technologies, and this is where banks can lean in and help SMBs adopt them.

By leveraging generative AI, banks can offer innovative, specifically tailored tools. While traditional solutions include API-based interfaces into accounting systems, integrated treasury management solutions and even free co-working spaces, the next frontier may be leveraging artificial intelligence as a service (AIaaS).

The Power Of Generative AI For Small Businesses

Generative AI can provide small businesses with a range of powerful tools, from personalized financial advisory services to dynamic financial planning and automated financial management. These solutions can analyze vast datasets to provide tailored advice, create financial models that evolve in real time and automate mundane tasks such as bookkeeping and invoice processing.

Beyond financial solutions, generative AI can enhance business operations by generating high-quality marketing content, handling customer inquiries through chatbots and virtual assistants and developing personalized marketing campaigns based on customer data analysis.

Positioning Banks As Technology Partners

By offering AIaaS, banks can position themselves not just as financial service providers but as essential technology partners for small businesses. This strategic positioning can set them apart from competitors.[…]

Read more: www.forbes.com

The AI-Driven Connected Enterprise
https://swisscognitive.ch/2024/05/25/the-ai-driven-connected-enterprise/ – Sat, 25 May 2024

AI-driven connected enterprises enhance data integration and streamline workflows using APIs and AI tools.

 

Copyright: forbes.com – “The AI-Driven Connected Enterprise” – APIs, RPA, Automation


 

Successful and innovative enterprises are well connected. They are notably good at preparing and harnessing external data. Artificial intelligence (AI) can enhance sources, processes and workflows, making a well-run enterprise stronger, quicker and more competitive.

Being able to access and use data from customers, suppliers and other stakeholders is a good indicator of an organization’s capacity to make the right decisions. An externally informed mindset, according to the authors of a McKinsey report on innovative companies, is less vulnerable to biases and internal politics and enables rapid course-correction of strategies, R&D priorities and other initiatives.

Applied smartly, information can improve decision-making and erode inefficiencies. The right kind of data infrastructure is what enables a company “to break down (or at least perforate) silos,” as McKinsey puts it. What you need are integrated data connections, more structured data, and a platform or fabric that can unify workflows, tasks and analytics. All can benefit from AI.

Connectors And APIs

Data integration is a complex equation. To start with, enterprises use myriad application programming interfaces (APIs), typically paired with connectors, to link with data sources. Managing these sets is a challenge. One way we do so is through crowdsourcing, enabling the reuse and adaptation of capabilities.

Many of our clients are already familiar with the task-mining capabilities of robotic process automation (RPA) and AI/machine learning (ML) algorithms. But you also can use AI to build and manage your API infrastructure.
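
As a rough illustration of that idea, here is a minimal sketch of using a general-purpose LLM to draft an API specification from a plain-language description. The client library, model name and prompt are assumptions for the example, not the article's method; any capable chat model could stand in.

# Minimal sketch: asking an LLM to draft an OpenAPI spec for a simple service.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

description = (
    "A 'customers' service with endpoints to list customers, fetch one by id, "
    "and create a new customer with a name and an email address."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {"role": "system",
         "content": "You write valid OpenAPI 3.0 specifications in YAML."},
        {"role": "user",
         "content": f"Draft an OpenAPI 3.0 spec for: {description}"},
    ],
)

draft_spec = response.choices[0].message.content
print(draft_spec)  # review, lint and version the draft before publishing it

A human (or an automated linter) still has to validate the draft before it reaches production; the point is that much of the boilerplate of API design becomes something the AI can manage.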

An emerging use case for generative AI (GenAI) is developing, optimizing and protecting APIs. (See, for instance, this Google Cloud session.) These kinds of deployments can, in turn, trigger a virtuous cycle: simplifying existing stacks of APIs, which makes it easier to adopt more AI. The other prerequisite to using data is making sure that it's in good order. […]

Read more: www.forbes.com

AWS Summit 2024 – Scaling your AI Executive Toys
https://swisscognitive.ch/2024/05/09/aws-summit-2024-scaling-your-ai-executive-toys/ – Thu, 09 May 2024

The Amazon Web Services (AWS) Summit in London on 24 April 2024 was all about scaling Generative AI solutions. Here are two questions to ask yourself before you try.

 

SwissCognitive Guest Blogger: Chris Sherrington – “AWS Summit 2024 – Scaling your AI Executive Toys”


 

This year's Amazon Web Services (AWS) Summit in London was an excellent opportunity to catch up on the progress in Generative AI (Gen AI). The resounding message was clear: if 2023 was about Gen AI proofs of concept (PoCs), then 2024 is all about scaling up.

Before embarking on the journey of scaling any Gen AI PoC, you need to correctly answer two key questions. First, do we have the right approach to data and cloud infrastructure, the foundations for our AI initiatives? And second, are we actively ensuring that we scale something that truly adds value? Let’s take a look at each in turn.

Cloud and Data Capability

Public Cloud is the obvious place to turn the talking black box that wowed your CEO into something that could do things for a customer. Hence it was no surprise the Summit focused on demonstrating the power of Gen AI in AWS’s ever-expanding toolbox.

AWS Bedrock does a great job mounting several Foundational Models (FMs) in a way that makes them consumable and modular. These digital brains can be transformed into AI agents capable of choosing options and taking action. Native serverless functions, message queues, identity providers, and API gateways give these brains the limbs and muscles to do real work.
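
To make that concrete, here is a minimal sketch of invoking a foundation model on Bedrock from Python. The region, model ID and request format are assumptions for illustration (the exact request body depends on the model family enabled in your account); it is not a summary of any Summit demo.

import json
import boto3

# Minimal sketch: calling a foundation model hosted on Amazon Bedrock.
# Assumes AWS credentials are configured and the chosen model is enabled.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",  # request format for Anthropic models on Bedrock
    "max_tokens": 300,
    "messages": [
        {"role": "user", "content": "Summarise this support ticket in two sentences: ..."}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption: any enabled FM works
    body=request_body,
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])

Wrapping calls like this behind an API gateway or a queue-driven function is what turns the talking black box into a component other systems can actually use.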

But things could go awry if your organisation still perceives software as a capital asset. Gen AI is more like a perishable good, subject to evolution and demanding constant adaptation.

Anything you build will be outdated in a few cycles – right now that’s months. Plus, everything in the cloud rots faster to begin with. If your organisation has never built and managed a digital product, you will struggle with the pace of Gen AI – even to just maintain the status quo.

***

To get Gen AI right, you need to invest not only in building a product but also a growing capability of people, skills, and tribal knowledge. Think of that investment as the cost of a gym membership. Embrace it as a lifestyle change. And don’t forget the basics of cloud engineering – that’s as big a mistake as skipping leg day at the gym.

Your Gen AI will obviously be powered by your data, but your data may not be ready. I’m not talking about sophisticated and neatly labelled training data. I mean fundamental business data: lists of customers, product catalogues, employee credentials, and so on. Clean, high-quality sources held on suitably performant platforms. Do not underestimate how important that is: the point was made and made again in every use case presented at the Summit.

Useful AI products require useful data products (or as close as you can get with your current architecture). If your data strategy is as basic as a reporting strategy on top of an application strategy, you will struggle to feed your AI capabilities.

Your data supply chain is the bottleneck that will limit your scaling. Try to first scale in an area with teams that care about providing good data directly from their functions – and help them do it better still. But there is a caveat.

The trap of scaling low value

The path of least resistance often leads to the place of lowest value. Many Gen AI PoCs were done at the margins of their enterprises due to safety concerns and the organisational inertia that any intrapreneur knows too well. There is a real risk of scaling something that is not worth it.

For example, I see a lot of HR-related use cases: chatbots that help employees understand policies or book time off. Using your employees avoids the discomfort of subjecting your customers to the new technology.

But that is also exactly what every HR system SaaS provider is working on. Unless you are an HR system SaaS provider, you are not learning anything useful from that Gen AI experiment. Something closer to how you create value for your customers would be a better choice.

Do not entertain sideshows just because they tick the right tech boxes. If you cannot describe the customer-relevant value streams, you lack sufficient strategic situational awareness. And if you don’t know when you are winning, why are you playing?

Key take-aways

Mature customers stand to gain a lot by productising FMs with AWS Bedrock. However, realising that potential requires a technical focus on modern data strategy. And alongside, you need a clear connection between your enterprise strategy and real-world value streams to see which areas are worth pursuing. Without that focus and clarity, your Gen AI PoCs will remain in the toybox.


About the Author:

Chris Sherrington has 20 years of experience in Enterprise Architecture, Technology Strategy and Innovation Leadership, making new tech work with old tech.

Virtual Assistant Initiative (VAI) – Danish Hospital LLM Use Case
https://swisscognitive.ch/2023/10/31/virtual-assistant-initiative-vai-danish-hospital-llm-use-case/ – Tue, 31 Oct 2023
Hospitals can use safe and secure LLMs to improve patient processes, treatment outcomes and promote healthcare equality.

The Virtual Assistant uses an LLM to improve patient processes, treatment outcomes and promote healthcare equality. To manage risk, meet clinical, governance, and security requirements from the outset, and deploy the first LLM version rapidly, the solution is implemented in two stages. The first LLM version focuses on a use case for thyroid cancer patients in the ENT department in Rigshospitalet.

 

SwissCognitive Guest Blogger: Neil Oschlag-Michael – “Virtual Assistant Initiative (VAI) – Danish Hospital LLM Use Case”


 

The benefits of LLMs were already recognized when the Virtual Assistant initiative was conceived in the early summer of 2023, but concerns about their safety remained. Existing, publicly available LLMs were trained on generic data, and there was no certainty that they could correctly answer specific questions from specific patient types about specific hospital procedures. This led to the VAI initiative, in which the Department of Otorhinolaryngology, Head and Neck Surgery & Audiology (ENT) and the Innovation Center at Rigshospitalet collaborated with 2021.AI to implement an LLM, named the Virtual Assistant for patients, or Virtual Assistant for short.

Located in Copenhagen, Rigshospitalet is the largest and most specialized hospital in Denmark, serving 75,000 inpatients each year. As a critical center for teaching and research, Rigshospitalet prioritizes innovation in a continual effort to provide world-leading healthcare. The ENT department in Rigshospitalet is the biggest university department in Denmark and is also the center for surgical head-neck oncology. Rigshospitalet’s Innovation Center has capabilities and competences in design, innovation, anthropology, business development, communication and strategic partnerships. It has experience in clinical innovation in all phases and accelerates the development of new healthcare solutions that create value for patients. 2021.AI is a Danish company which specializes in AI and AI governance. 2021.AI helps companies around the globe to accelerate their AI adoption by delivering the three key components that any organization needs to manage, implement and run AI systems successfully; models, platform, and governance.

The initiative was partly funded by the Danish Life Science Cluster, and one of the first requirements was to complete the project rapidly: in 10 weeks. To meet this deadline, manage risk and ensure compliance with clinical, governance and security standards from the outset, the project team decided to adopt a two-stage approach. In the first stage, a 10-week project would deploy and test an LLM and document the requirements for the full-blown implementation project in the second stage.

The use case selected for the project was designed to meet several aims: to improve patients' experiences, to improve hospital processes and outcomes, and to promote equality. It entailed using an LLM to answer questions from patients, limiting the initial scope to thyroid cancer patients for whom an operation was planned in the ENT department. There are 400 suspected thyroid cancer patients per year, of whom approximately 150 undergo thyroid surgery.

Thyroid cancer is a serious and deadly disease, where early and correct treatment is crucial for patients' survival and quality of life. One challenge within the healthcare sector is that many patients, including thyroid cancer patients, have a significant need for information, not least to prepare properly for operations. Hospitals provide this information, but some patients struggle to understand it, and preparing for an operation is challenging when factors such as fasting or medication can be crucial. Currently, patients with questions or in doubt can contact the hospital, but there is no 24/7 response service. Patients with dyslexia, who make up 6-8% of all social classes and a higher percentage of more disadvantaged ones, are at an added disadvantage. In the worst case, they do not receive a response in time and cannot prepare fully. This can result in cancellations: it is estimated that 6-10% of these operations currently need to be postponed, affecting patients' experiences or worsening their condition and increasing an already-burdened hospital workload.

To address these issues the project would develop a Virtual Assistant, which would use Rigshospitalet’s procedures and documents, as well as other credible sources, to answer pre-operative questions from thyroid cancer patients. The Virtual Assistant would respond to questions about the treatment process and provide patients with responses to issues they may not have understood or had the opportunity to raise during pre-operative consultations. In doing so it would also promote equity in healthcare, improve treatment readiness and improve outcomes.

There were four main LLM requirements. The LLM must only base its answers on approved input data, which consisted of documents provided by the hospital for this purpose. The LLM must answer questions correctly based on the input data. The LLM must not answer questions related to certain topics, for example for diagnoses or questions about death. And the LLM must indicate when answers cannot be synthesized from approved input data, for example by adding: “I couldn’t find any relevant information” to its response.
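
A minimal sketch of how those four requirements can be expressed in code is shown below. The function name, blocked-topic list and retrieval step are assumptions for illustration; this is not the actual Virtual Assistant implementation, which runs on the GRACE platform described next.

# Sketch of the four stated requirements, assuming a retrieval step over the
# approved hospital documents and a generic chat LLM passed in as a callable.
BLOCKED_TOPICS = ["diagnosis", "prognosis", "death"]   # assumption: configured by clinicians
FALLBACK = "I couldn't find any relevant information."

def answer_patient_question(question: str, retrieve, llm) -> str:
    # Requirement 3: do not answer questions on restricted topics.
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return ("I'm not able to answer questions on this topic. "
                "Please contact the ENT department directly.")

    # Requirement 1: only use approved input data (the retrieved passages).
    passages = retrieve(question)   # returns relevant chunks of approved documents
    if not passages:
        # Requirement 4: say so when no answer can be synthesized.
        return FALLBACK

    prompt = (
        "Answer the patient's question using ONLY the passages below. "
        f"If they are insufficient, reply exactly: '{FALLBACK}'\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    # Requirement 2: the answer is grounded in the approved passages;
    # correctness is then verified through the testing described below.
    return llm(prompt)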

The solution architecture was not designed from scratch but was based on an existing solution on 2021.AI's GRACE platform: GRACE governance for LLMs, which addresses the risks and concerns associated with the use of LLMs. For a start, GRACE could be deployed rapidly as a web service. GRACE's microservice architecture is configurable, scalable and designed to meet changing requirements, and it can be integrated with any public LLM which has an API or with a local open-source LLM. It offers the same functionality as leading public chat services, making the switch to GRACE easy for LLM users, and it extends this with a governance framework to operationalize organizational policies, ethical guidelines and regulations. GRACE provides a secure environment with role-based access control. This solution provided transparency and accelerated the project, allowing data scientists to start working on prompt engineering – the process of tuning a prompt to achieve the desired LLM response – early in the project.

Project governance was ensured by anchoring the project within hospital management and coordinating closely with clinical staff, instead of managing it as a standalone IT project, only governed by technical roles. This ensured that project goals and decision making were aligned with hospital and clinical goals and testing was managed in close coordination with hospital and clinical staff. ENT department surgeons took part in testing, testing was extended to meeting new requirements raised during the process, and testing was not limited to a technical test which only compared LLM responses with the information available in the approved source documents.

The Virtual Assistant met the LLM requirements. It demonstrated that it could retrieve information correctly, cite sources, communicate efficiently and not respond to topics tagged as sensitive. Key lessons learned include the importance of data quality, robust test procedures and ensuring governance from the outset. LLM efficacy is intricately linked to input data quality: testing uncovered issues with outdated or ambiguous input data and the risk of potential misinterpretations in data sources. Expert feedback from surgeons and nurses provided invaluable insights for LLM performance and optimization. Governance is required to implement safe and responsible LLM systems. And while they are relevant for all successful projects, the need for a motivated team with shared goals, open communication, close collaboration and robust "traditional" project management must be acknowledged. Without these, any project, let alone a rapid LLM implementation, would hardly succeed.

This solution differentiates itself by combining advanced technology, healthcare capabilities, AI governance and risk management to develop a safe solution for patients and hospitals. Patients can receive the help they need, when they need it, preventing their condition from worsening. This reduces the risk of cancelling or postponing operations, improves hospital efficiency, reduces the workload, and has the potential to relieve healthcare personnel and free up resources for more complex patient needs. The solution is scalable, and its scope can be extended to include more patient groups and more use cases in more departments. Chat can be extended to include voice support, and governance requirements can be extended to include compliance with, say, the EU AI Act.


About the Author:

Neil Oschlag-Michael is a data scientist and AI strategy and governance consultant. He works with organizations to use AI effectively, efficiently, easily and not least ethically and responsibly.

OpenAI's Latest Updates Made ChatGPT and Dall-E 3 Stronger Than Ever
https://swisscognitive.ch/2023/10/16/openai-updates-chatgpt-dalle-3/ – Mon, 16 Oct 2023

The most recent OpenAI updates may have flown under the radar for some, but trying them firsthand is mind-blowing.

 

“OpenAI Updates Made ChatGPT and Dalle-3 Stronger Than Ever”


 

In the storm of AI tools, you might have already bumped into several articles and social media posts claiming that "ChatGPT is dead", but unsurprisingly, OpenAI sees it very differently. Since Dall-E and ChatGPT (both by OpenAI) first burst into the spotlight, several events have unfolded in the Generative AI industry, with giants making their entrance. But apparently, OpenAI is not relinquishing its leading role just yet.

Starting with ChatGPT: in the last few months, new plugins popped up every day, but real-time browsing in particular brought disappointment, with error messages and inconsistent functionality from third-party AI tools. Recently this changed, as OpenAI integrated a browsing option with Bing (for Plus users only). The results still have to be handled carefully and verified, but this update already outperforms the previous plugins. Notably, it cites references.

What better way to illustrate this than asking ChatGPT through Bing about OpenAI’s most recent updates?

ChatGPT Updates:

(by ChatGPT on 14.10.2023)

Browse Feature (September 27, 2023):

  • ChatGPT can now browse the internet to provide current and authoritative information with direct links to sources. This feature is available to all Plus users and is no longer limited to data before September 2021 [1].

Voice and Image Capabilities (September 25, 2023):

  • Voice (Beta) is now available to Plus users on iOS and Android, allowing back-and-forth voice conversations.
  • Image input is available to Plus users on all platforms. Users can now show ChatGPT images to troubleshoot issues, explore contents, or analyze complex graphs [2].

Language Support (September 11, 2023):

  • Alpha support for multiple languages including Chinese, French, German, Italian, Japanese, Portuguese, Russian, and Spanish on web [2].

ChatGPT Enterprise (August 28, 2023):

  • Launch of ChatGPT Enterprise offering enterprise-grade security, privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, and customization options [2].

Custom Instructions (August 2023):

  • Custom instructions are now available to users in the EU and UK and to free users, allowing them to customize interactions with ChatGPT [2].

Voice and Image Analysis Capabilities:

  • The update also introduced voice and image analysis capabilities, making the interaction more intuitive [3].

Being able to—and doing so—check the references is immensely helpful, and in this instance, it directs us to reliable pages, including OpenAI’s own Release Notes.

These are quite useful results, which also point to the OpenAI announcement published on 25 September, with a detailed guide on how to use the new image and voice features.

The chatbot also provided us with the news about Dall-E:

DALL-E Updates:

(by ChatGPT on 14.10.2023)

DALL-E 3 Launch (September 2023):

  • DALL-E 3 was announced with enhanced capabilities for understanding "significantly more nuance and detail" compared to previous iterations [4].

Safety Measures:

  • DALL-E 3 continues the focus on safety by limiting the ability to generate violent, adult, or hateful content [5].

Integration with Bing Image Creator:

  • DALL-E 3 was integrated into Bing Image Creator, facilitating the creation of realistic images and a wide range of artwork by following simple prompts [6][7].

Those with a keen eye may have already spotted the available integration with Dall-E 3, simplifying user experiences remarkably. Outweighing even this is Dall-E 3’s improved model and its ability to craft detailed images with mostly accurate spelling.

Here is an example of using Dall-E 3 inside ChatGPT.

Although it still contains mistakes, it is much improved from the previous model, making creative work significantly faster.

The quality of human faces is much better, and the dreaded difficulty with drawing hands has been eliminated.

Having these visuals just a click away from GPT-4's ability to create countless complex texts, code and real-time searches makes it an even more valuable tool than it has ever been before.

Now, we have only mentioned a few of OpenAI's updates, but from Custom Instructions, through document readability and the reported API updates, to the Canva integration, there are several other exciting pieces of news to explore in more depth.

If you liked this article and would be interested in reading more like this, let us know!

The Evolution of Software: AI, NoCode, Blockchain's Disruption of Traditional Programming
https://swisscognitive.ch/2023/10/12/the-evolution-of-software-ai-nocode-blockchains-disruption-of-traditional-programming/ – Thu, 12 Oct 2023
NoCode platforms, AI, and blockchain are reshaping the software landscape by introducing self-updating, self-fixing, and trusted applications.

This paper explores self-updating and self-fixing software, harnessing AI and innovative platforms based on NoCode super app builders. Trust emerges as a central theme, especially in autonomous software changes. To ensure reliability, we employ rigorous testing and blockchain-based verification. While technology advances, the enduring need for human trust remains at the forefront.

 

SwissCognitive Guest Blogger: Ivan Assenov – “The Evolution of Software: AI, NoCode, Blockchain’s Disruption of Traditional Programming”


 

In the modern era, traditional programming touches nearly every aspect of our lives, whether it’s through our work, the games we play, or the apps we use daily. Over the past 75 years, software has become ubiquitous, but it’s also gained a reputation for being notoriously buggy and expensive. This imperfection, combined with the high costs and time involved in creating software — especially at the enterprise level — means many software products struggle to maintain profitability despite their popularity. With consumers constantly seeking more for less, the software industry is in a cycle of relentless reinvention. The past decade has brought forth three revolutionary trends poised to redefine the landscape: NoCode/LowCode development, artificial intelligence, and blockchain. In this article, we’ll delve into how these three innovations, working in tandem, might shape the future of software.

If we break down a software cycle today at a very high level, it usually consists of gathering some requirements, developers writing some code, then deployment, after which the software is available for use by the end users. There is much more to this cycle, known in the software industry as the SDLC – Software Development Life Cycle – but in a nutshell it is not that complicated. And yet, tens of thousands of books have been written, certifications have been issued, and whole divisions in the corporate world have been devoted to dealing with the side effects of how software functions or malfunctions. From cybersecurity and compliance to DevOps, agile coaches, and scrum masters, a massive amount of cash is injected to keep the machine afloat and all sides in sync.

What if we are to optimize all of this chaos with a few simple steps that will fundamentally transform how software is written, maintained, and distributed?

Self-tuning software and self-releasing software

As the digital realm evolves, the demands on software grow exponentially. Picture a world where a software product, be it a sophisticated website or an engaging mobile app, has a high daily active user (DAU) count and the innate capability to evolve independently. What if this software could automatically optimize itself to enhance its DAU and/or Average Time on Page (ATP), enact AI-driven fixes, and autonomously deploy updates? The claim below is that such a vision is achievable through the combined prowess of AI, NoCode, and Blockchain.

To truly grasp this vision, let’s delineate its elements:

  • We aim to amplify already operational software, not start afresh.
  • The primary goal? Boost the DAU/ATP, potentially by refining the user interface or tailoring user experiences.
  • All modifications are AI-driven, ensuring adaptability and responsiveness.
  • These changes are not just theoretical; they are auto-deployed to the live environment in real-time.
  • Users remain at the forefront. Their experience is seamless, undisturbed by the ongoing evolution.
  • Finally, and most crucially, every autonomous change is TRUSTED, guaranteeing both functionality and security.

Diving deeper, we will explore how AI, NoCode platforms, and the immutable nature of Blockchain technologies can make such a futuristic vision a reality.

Below, we will go over each one of them using AI, NoCode, and Blockchain technologies.

Prerequisites:

For software to truly be self-evolving, its origin is pivotal. It should be constructed using a leading-edge NoCode super app builder platform, complemented by its unique declarative language. Such a foundation heralds an age where software is not just functional but is also learning and adapting in real time.

For the system’s initial setup, human intervention will be crucial to fine-tune the preliminary system prompts. This hands-on approach will be maintained for the initial thousands of iterations to ensure optimal performance.

Enhancing DAU and ATP through Automated UI Adjustments

The synergy of machine learning and generative AI holds immense potential in our mission to elevate the user experience. By weaving these AI technologies into metrics that evaluate DAU/ATP, site performance, and user interactions, we can incrementally perfect the user interface to resonate more deeply with users.

Let’s visualize a fundamental webpage design: it comprises a label, some text, and a button. Over successive releases, subtle modifications, be it in their positioning, color schemes, default settings, or shapes, are introduced. The ramifications of these changes on user engagement are then meticulously analyzed.

What sets this system apart is its foundation in NoCode. This means that the incorporation and fine-tuning of these elements can be actioned through APIs. Adding a layer of sophistication, the system has the capacity to roll out varying UI layouts across distinct geographical locales, thereby evaluating the sway of regional and cultural nuances on user behavior. With continuous feedback, the system hones in on an interface configuration that garners maximal user traction.

Each interaction a user has — ranging from the duration spent on tasks to the nuanced patterns of cursor or finger movements — is documented. When this treasure trove of data is aligned with its respective release version, it provides invaluable insights. It’s pivotal to implement the Semantic versioning approach, typified by its trio of numerals, ensuring smooth tracking and management of these iterations. For a seamless experience, it’s imperative that every integrated plugin or mini-app conforms to this versioning framework.
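
As a small, hedged illustration of why semantic versioning matters for this kind of per-release analysis, the sketch below tags captured events with a MAJOR.MINOR.PATCH release string and orders them numerically; the event fields and version values are assumptions for the example.

# Sketch: grouping captured interaction events by semantic version (MAJOR.MINOR.PATCH).
def parse_semver(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

events = [
    {"eventType": "click", "elementId": "button123", "release": "2.4.1"},
    {"eventType": "click", "elementId": "button123", "release": "2.10.0"},
]

# Numeric comparison keeps releases in the right order; naive string sorting
# would wrongly place "2.10.0" before "2.4.1".
events.sort(key=lambda e: parse_semver(e["release"]))
print([e["release"] for e in events])   # ['2.4.1', '2.10.0']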

At the heart of the initiative lie the profound capabilities of AI. For us to realize seamless automation of these alterations, it's vital to synchronize our Large Language Model (LLM) with the API overseeing the creation and refinement of mini-apps within the NoCode super app ecosystem. To bolster the quality and range of the data, a strategic move would be to roll out diverse versions across multiple regions or user demographics. This would furnish us with a broad spectrum of data, enabling sharper insights into patterns of progression or setbacks.

Addressing the crucial aspect of security and privacy, there are a couple of pathways we can tread. The Retrieval-augmented generation (RAG) system offers one solution. Alternatively, a privately hosted LLM stands as another viable choice. By incorporating a dedicated embedding API interfacing with a vector database, we can assure the continuity of data updates. Currently, the accessibility of RAG is commendable, with numerous providers extending generative model services. Merging these with embedding functions and vector databases becomes a relatively straightforward task. For entities operating on leaner budgets or those with distinct requirements, more rudimentary machine-learning models are available, though they might lean more heavily on manual supervision. On the flip side, a privately hosted LLM could be an ideal fit for expansive enterprise networks that can bear the accompanying expenses.

Equipped with this newfound flexibility in deployment decisions, it’s crucial to recognize the looming challenge: maintaining trust. With every new change introduced, the overarching question remains—how will these alterations serve real-time users, especially if the entire operation is expected to function without human intervention?

There could be a few steps we need to establish:

Instead of simply recording primary events like clicks, scrolls, or page navigations, we advocate for a deeper capture of interactions. This should encompass touch actions on mobile and mouse actions on desktop, stored in a consistent key-value pair format. For instance:
{ “eventType”: “click”, “elementId”: “button123”, “timestamp”: “2023-10-06T14:30:00Z”, “location”: [x,y] }.

  • Textual Interpretation: Transform the structured key-value pairs into user-friendly textual descriptions. Using our earlier example, this might read as:
    “The user clicked on button123 at coordinates x,y.”
  • Text Embedding: Utilize Natural Language Processing (NLP) models such as Word2Vec, FastText, or even sophisticated variants like BERT to generate embeddings from these textual insights (see the sketch after this list).
  • Vector Database Storage: Archive the created embeddings in a vector database, making sure they’re aptly indexed for swift retrieval.
  • AI-Driven Search Optimization: Employ AI to sift through the saved embeddings, pinpointing patterns from prior interactions. This enables the system to bolster positive user experiences or adjust in cases of identified regressions.
  • Feedback Loop for Continuous Enhancement: Introduce a feedback mechanism, consistently fine-tuning and retraining the system, ensuring an ongoing refinement in user interaction capture and interpretation.
  • Human Oversight: While the emphasis is on automation, the importance of human monitoring, especially in the nascent stages, cannot be overstated. It guarantees that system optimizations align with intended outcomes and maintain a human-centric approach.
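
A minimal sketch of the textual interpretation, embedding and vector-storage steps above is shown below. It uses a small open-source sentence-embedding model and an in-memory store; the model choice and the event-to-text template are assumptions, and a production setup would use a dedicated vector database.

# Sketch: turn captured events into text, embed them, and search them in memory.
# Assumes the sentence-transformers package; a real deployment would persist
# the vectors in a vector database instead of a Python list.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def event_to_text(event: dict) -> str:
    # Textual interpretation of a key-value event record.
    return (f"The user performed a {event['eventType']} on {event['elementId']} "
            f"at {event['timestamp']}.")

events = [
    {"eventType": "click", "elementId": "button123", "timestamp": "2023-10-06T14:30:00Z"},
    {"eventType": "scroll", "elementId": "page-body", "timestamp": "2023-10-06T14:31:10Z"},
]

texts = [event_to_text(e) for e in events]
vectors = model.encode(texts, normalize_embeddings=True)   # text embedding step

def search(query: str, top_k: int = 1):
    # AI-driven search: cosine similarity over the stored embeddings.
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = np.array(vectors) @ q
    return [texts[i] for i in np.argsort(scores)[::-1][:top_k]]

print(search("Which button did the user press?"))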

Once all components are seamlessly aligned, a critical element remains for our autonomously managed and deployed model: establishing TRUST. Users need the assurance that the software autonomously deployed is not just operational, but is precise, unbiased, and safe.

Before delving into the process of establishing trust, let’s first discuss code coverage, an integral aspect that subsequently ties back to trust. Among the multitude of code coverage types (more than 15), Statement Coverage (C1) and Branch Coverage (C2) stand out as the most prominent.

Statement coverage assesses whether every code line has been addressed at least once during automated testing, while branch coverage evaluates how many control branch pathways (like ‘if’ statements) have been executed.
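
To make the distinction concrete, here is a small, self-contained example; the function and tests are illustrative assumptions, not taken from any specific codebase.

# Statement (C1) vs branch (C2) coverage on a tiny function.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        price = price * 0.9   # the only statement inside the branch
    return price

def test_member_discount():
    assert apply_discount(100.0, True) == 90.0

# Running only test_member_discount executes every statement (100% C1),
# but only the True branch of the 'if' (50% C2). Adding the test below
# exercises the False path as well, bringing branch coverage to 100%.
def test_non_member_pays_full_price():
    assert apply_discount(100.0, False) == 100.0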

Today’s developers may only intermittently apply these tests, and even then, not to their fullest extent, leading to incomplete and outdated tests over time.

For genuine trust in automated releases, we must:

  • Ensure that both prevalent code coverage types are extensively addressed.
  • Independently verify that this coverage is trustworthy.

To accomplish this, we propose leveraging certified public blockchain transactions to log actual runs of C1 and C2 tests, introducing two levels of assurance: Level 1, where both C1 and C2 are 95% or higher, and Level 2, with both C1 and C2 at a full 100%.

It’s imperative that the chosen blockchain be public, affordable (costing mere thousands of pennies), adhere to the foundational Bitcoin protocol, and make no compromises. Currently, only a handful of blockchains meet these standards. This means in the future a new blockchain must be created to suit these activities.

Access to unit tests should be granted at the Blockchain node levels, accommodating those keen on mining blocks. Moreover, the declarative language fueling the NoCode platform components and plugins should be fully open-source. Furthermore, plugin codes derived from this declarative language should be accessible to miners upon registration.

Envision a scenario where every plugin, NoCode element, or mini-app possesses a certification indicating its trust level. This blockchain could either store the certification’s hash or its entire content sourced from the tests.

Below is an example of such a structure
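
As a hedged illustration (the field names, values and hashing choice are assumptions, not a prescribed schema), such a certification record and the hash that would be anchored on-chain might look like this:

# Sketch: a coverage-certification record for a plugin, plus the hash that
# would be written to a public blockchain. All field names are illustrative.
import hashlib
import json

certification = {
    "pluginId": "checkout-mini-app",
    "version": "2.4.1",                      # semantic version of the certified build
    "coverage": {"statement_c1": 100, "branch_c2": 100},
    "trustLevel": 2,                         # Level 1: both >= 95%, Level 2: both 100%
    "testRunTimestamp": "2023-10-06T18:00:00Z",
}

# Canonical JSON so the same record always produces the same hash.
payload = json.dumps(certification, sort_keys=True, separators=(",", ":")).encode()
certification_hash = hashlib.sha256(payload).hexdigest()
print(certification_hash)   # this digest (or the full record) is what gets anchored on-chain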

Merging these components results in an autonomous system — externally observable, yet internally equipped and self-reliant to execute requisite tasks. As the digital realm undergoes swift metamorphosis, the bedrock of trust remains unshaken. While our drive for flawless automation is commendable, it should not eclipse the users’ innate need for reliability and security. Marrying meticulous testing methods with blockchain’s unwavering transparency, we transcend beyond just delivering software; we promise serenity. Poised at this technological precipice, our forward gaze is filled not just with eagerness but an unwavering pledge to honor the trust.

In this era dominated by code, the essence of trust remains inherently human.


About the Author:

Ivan Assenov‘s mastery of NoCode/LowCode has impacted hundreds of millions in the financial domain. Combining insights from high-frequency trade transactions with blockchain and generative AI systems, he stands as a software visionary.

AI Ethics and Unconscious Bias
https://swisscognitive.ch/2023/07/06/ai-ethics-and-unconscious-bias/ – Thu, 06 Jul 2023
Exploring the critical intersection of AI and Ethics, we shed light on the importance of mitigating unconscious bias in data-driven systems.

Enumerating AI principles to be used as a guide or a questionnaire before launching an AI system or during the development process.

 

SwissCognitive Guest Blogger: Shivi Gupta – “AI Ethics and Unconscious Bias”


 

Data science has been the buzzword for almost a decade now. We still don't fully understand how Instagram knows the reels we will like, or YouTube the videos we will watch. Their recommendation engines have reached a stage where they are almost asking us to be lazy and just watch what they have to offer. They are taking away the choice, and we are so mesmerised by the tech that we are comfortable this way.

Now let us talk about these recommendation engines and how they are knowing us so well. One of the 5 stages of data science is data collection. Here a firm collects data from public or private sources and “cleans” it as per its business acumen or requirement. This process is also called data mining. If this process isn’t monitored for transparency, diversity, privacy and integrity, it could have serious implications for the system that is in the market or about to be launched.

Ethics in AI is the need of the hour. We can't let systems or automated scripts dictate to us without any accountability. An AI system which hasn't been checked against ethical principles can give birth to bias or reinforce the unconscious biases that already exist within society.

The stereotypes that exist in machine learning data are assumptions like: a nurse has to be female, a homemaker has to be a woman, a driver is always a man, and so on. To make sure the AI system does not upset a group, religion or sect with its responses, it has to follow the principles below:

  • Integrity – ability to register complaint, admit and rectify the mistakes made.
  • Diversity – data set should be diverse enough to break sexist/racist/discriminatory stereotypes.
  • Robustness – should be able to identify any unconscious bias in the user's prompt as well as the output it is producing.
  • Accountability – if an output is harmful, the creator or the company must be accountable and responsible for the output and its consequences.
  • Fairness – how fairly an algorithm was used, was the data biased against a particular group or community?
  • Transparency – an AI system must be able to clarify how it predicted a particular set of output from the prompt
  • Explainable – an AI should be able to explain how an output was reached and what techniques or algorithm were used. This can be tricky since too many technical details wouldn’t make sense to everyone.
  • Data privacy – the data used, has not disobeyed any data privacy laws of any government or country.
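
As a small, hedged sketch of how the Diversity and Fairness checks above can start in practice (the dataset and column names are hypothetical, for illustration only), a data scientist might first inspect group representation and label rates:

# Sketch: a quick pre-training bias check on a tabular dataset.
# "training_data.csv", "gender" and "hired" are hypothetical names.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Diversity: how are the groups represented in the data?
print(df["gender"].value_counts(normalize=True))

# Fairness: does the positive-label rate differ sharply between groups?
print(df.groupby("gender")["hired"].mean())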

To remember the principles better, I came up with the mnemonic I_DRAFTED, which covers all of them. This is just a reference guide; any organization can amalgamate a few of the principles and create its own mnemonic. This can help data scientists ask the right questions before or during the process of developing an AI system. Intelligent questions are what separate a smart, responsible practitioner from an ordinary one.

References:

https://deon.drivendata.org/

https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4t6dA

https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020


About the Author:

Shivi Gupta is a passionate data scientist and full-stack developer, working in the industry for over a decade. An ardent researcher and innovator who loves to automate things. Hobbies include travelling, eating, and playing sports (indoor and outdoor).

Unlocking the Generative AI Investment Frontier: Expert Q&A – Part 1
https://swisscognitive.ch/2023/06/21/unlocking-the-generative-ai-investment-frontier-expert-qa-part1/ – Wed, 21 Jun 2023
Beny Rubinstein dives into the most fascinating questions of our "Generative AI: A New Frontier for VC Investments" virtual event.

In the wake of our transformative “Generative AI: A New Frontier for VC Investments” virtual event, the buzz of innovation, disruption, and opportunity continues to resonate. The discussions and insights shared by our experts inspired a myriad of queries from our audience. It’s clear that as we navigate this new frontier, there’s still so much to unravel about Generative AI and its implications for Venture Capital investments.

 

“Generative AI: A New Frontier for VC Investments” Q&A with Beny Rubinstein, Head of BV Israel, TIGER 21 Chair, TLV


 

Your questions reflected curiosity, engagement, and a thirst for more knowledge, and we couldn't leave them unanswered. Therefore, we've reached out to one of our esteemed speakers, Beny Rubinstein, to provide clarity and delve deeper into this fascinating subject.

In this Q&A article, we present Beny's thoughtful and insightful responses to your questions. Covering a broad array of topics – from use cases of Generative AI in venture capital, through bias, to LLMs' effect on education – this Q&A provides information for anyone keen to understand this dynamic technology and its role in shaping our future.

Whether you attended the event or are just catching up, we invite you to dive into the following article. Consider this a continuation of the conversation started at our event, an opportunity to revisit the frontier of Generative AI and deepen our understanding of this revolutionary technology. Let’s continue to explore, question, and pioneer together.

And this is just the beginning. We’re preparing yet another enlightening Q&A piece featuring insights from our other accomplished speakers. They’ll be addressing even more of your queries, shedding light on further complexities, and painting a broader picture of the Generative AI landscape.

Stay tuned for our next article, where we’ll be delving into more intricate facets of Generative AI, its potential impact on various sectors, and how it’s shaping VC investment strategies. Your curiosity fuels this journey, and together, we’re continuing to explore the breadth and depth of this transformative technology.

For the conference details, agenda, speaker line-up, and handouts CLICK HERE.
For the conference recording CLICK HERE

Q: Christopher Mott: If we are going to talk about productivity and operational use of AI INSIDE VCs, can we be concrete about what AI services and use cases — maybe using a lifecycle approach from POV of the VC? General benefit statements aren’t as useful as those we can read in the media or find on Google. Likewise, if APIs are important for the interoperability of operations in enterprises or other areas, what use cases and applications are being integrated to create a new and better way than the past?

 

[Beny Rubinstein]: Certainly! AI can provide valuable benefits to venture capital firms across various stages of the investment lifecycle. Here are some concrete use cases of AI in the context of venture capital:

Deal Scouting: AI-powered algorithms can analyze vast amounts of data from various sources such as news articles, social media, industry reports, and startup databases to identify potential investment opportunities. Natural Language Processing (NLP) techniques can extract relevant information and identify emerging trends, helping VCs discover promising startups more efficiently.

Due Diligence: AI can assist in the due diligence process by automating data analysis and pattern recognition. Machine Learning algorithms can analyze financial statements, market trends, and customer feedback to provide insights into a startup’s financial health, market potential, and competitive positioning. This helps VCs make more informed investment decisions and identify potential risks.

Market Analysis: By analyzing large datasets, AI algorithms can identify market trends, consumer behavior patterns, and competitive landscapes. This information helps VCs assess market opportunities and potential risks associated with a particular investment.

Portfolio Management: AI-powered dashboards and predictive analytics can help VCs track the progress and performance of their portfolio companies, identify areas for improvement, and make data-driven decisions regarding resource allocation and strategy adjustments.

Risk Assessment: By leveraging historical data, AI models can identify patterns and signals that may indicate risks related to financial stability, operational efficiency, or market dynamics. This enables VCs to proactively address risks and take appropriate measures to protect their investments.

Investor Relations: AI-powered chatbots and virtual assistants can enhance communication and engagement with limited partners (LPs).

It’s important to note that while AI can bring significant benefits to venture capital, human expertise and judgment remain essential throughout the investment process. AI should be seen as a tool to augment and assist VC professionals rather than replace them.

I wish I had had AI tools to help me and my team do that back in the days when I started Acelera Partners, a post-accelerator and micro-VC that invested in AI startups! Hopefully, for the next fund I raise, that will be one of the main areas I focus on, so that VCs can also reap the benefits of AI for better outcomes of their own activities – "walk the talk"!

Regarding APIs (Application Programming Interfaces): they play a crucial role in enabling interoperability and creating new and improved ways of operating across various sectors. Here are some use cases and applications where the integration of APIs is driving innovation and transforming traditional practices:

  1. E-commerce and Retail: API integrations have revolutionized the e-commerce and retail industry by enabling seamless connections between different systems. For example:
  • Payment Gateway APIs allow businesses to securely process online transactions and accept various payment methods.
  • Shipping APIs enable real-time tracking and logistics management, improving order fulfillment and customer experience.
  2. Fintech and Open Banking: APIs are reshaping the financial services landscape by promoting interoperability and enabling new services. Key applications include:
  • Open Banking APIs allow secure sharing of financial data between banks and authorized third-party providers, empowering users with better financial insights and enabling innovative services. This is a very hot area for banco BV, for example.
  • Payment APIs enable easy integration of payment processing into applications, facilitating smooth and secure transactions.
  • Investment APIs provide developers access to stock market data, trading capabilities, and investment tools, fostering the development of investment platforms and robo-advisory services.
  3. Healthcare and Telemedicine: APIs are transforming healthcare by facilitating data exchange, interoperability, and telemedicine services. Examples include:
  • Electronic Health Record (EHR) APIs enable secure and standardized sharing of patient health data across different healthcare systems, improving care coordination and interoperability.
  • Telemedicine APIs integrate video consultations, appointment scheduling, and patient management systems, enabling remote healthcare delivery and telehealth applications.
  • Health and Fitness APIs connect wearable devices, mobile apps, and health monitoring platforms, providing users with personalized health insights, and encouraging preventive care. I used that back in 2008 when I was the global product manager for Microsoft HealthVault (we were really early in the game and learned a lot!).

These are just a few examples highlighting how API integrations are driving innovation and creating new and improved ways of operating across industries. APIs foster interoperability, collaboration, and the development of novel services and applications, ultimately enhancing user experiences and unlocking new opportunities for businesses.

Q: Eleanor Wright: Will bias in venture capital lead to bias in AI?

 

[Beny Rubinstein]: I can share insights on the relationship between bias in venture capital and bias in AI from a few different vantage points.  First, as someone who was employee number 6 in Microsoft Cloud & AI (Azure) global business development team. Second, as a venture capitalist with extensive experience in early-stage startups and AI investments (I founded and led the first partner of Microsoft Ventures in Latin America and invested in a dozen Israeli AI startups accelerated by Microsoft in Israel).  I actively engage with organizations like Women in Tech where I will be on a panel on July 3rd in Tel Aviv addressing “How AI empowers diversity & inclusion by eliminating biases, fostering fair decision-making, and creating equitable opportunities for all”.

While bias can exist within the venture capital industry, it does not necessarily directly translate into bias in AI. However, it’s important to recognize that venture capital plays a crucial role in shaping the development and deployment of AI technologies. The biases present within the venture capital ecosystem, such as unconscious biases in investment decision-making, can indirectly impact the diversity and inclusivity of AI innovations.

When venture capitalists predominantly invest in startups led by individuals from specific demographics or with similar backgrounds, it can result in a lack of diversity in the teams developing AI technologies. This limited representation can inadvertently lead to biases in the data used to train AI models and the design decisions made during their development.

Bias in AI can emerge from various sources, including biased training data, algorithmic design choices, and the social and cultural context in which AI is deployed. Therefore, addressing bias in AI requires a comprehensive approach that encompasses not only venture capital but also diverse representation in AI research and development teams, inclusive data collection and labeling processes, rigorous testing and validation, and ongoing ethical considerations.

In summary, while bias in venture capital does not directly lead to bias in AI, it can indirectly influence the diversity and inclusivity of AI technologies. Addressing bias in AI requires a multifaceted approach, including diverse representation in AI development teams, inclusive data practices, and ongoing ethical considerations, in which venture capitalists can play a significant role by promoting diversity in their investment decisions and supporting initiatives aimed at addressing bias in AI.

Q: Arvind Punj: Can anyone comment on the change in the education system which is needed because of the LLM models impacting learning?

 

[Beny Rubinstein]: Absolutely! This is an incredibly fascinating and crucial topic for the future of society!  The rise of LLMs has the potential to bring significant changes (and improvements!) to the education system. These models can provide access to vast amounts of information and assist in automating certain tasks traditionally performed by educators, such as grading and content generation. They also have the potential to personalize learning experiences, offering tailored feedback and adaptive resources to individual students.

However, the integration of LLMs into education also requires rethinking the role of human educators and the need for a balanced approach. While LLMs can provide valuable assistance, they cannot fully replace the essential aspects of human interaction, mentorship, and emotional intelligence that educators bring to the learning process. To leverage the benefits of LLMs while mitigating their limitations, the education system must adapt. This adaptation may involve integrating LLMs as tools to support educators, emphasizing critical thinking, problem-solving, and creativity in the curriculum, and focusing on developing skills that are uniquely human and complementary to AI capabilities (after all, those will be the skills required soon; I suggest you refer to the World Economic Forum's "Future of Jobs Report 2023" for more details).

Furthermore, attention should be given to ethical considerations surrounding the use of LLMs in education, such as data privacy, algorithmic biases, and the potential for widening educational inequalities. Safeguards and guidelines should be put in place to ensure responsible and equitable deployment of LLMs in educational settings. In summary, the integration of LLMs into the education system has the potential to revolutionize learning experiences, but careful thought and planning are required to strike the right balance between technology and human involvement, address ethical concerns, and ensure equitable access to education in the age of AI. My former professor at the University of Pennsylvania’s prestigious Wharton School of Business, Ethan Mollick, has been doing fascinating work in this space: he not only allows his students to use ChatGPT, he requires them to (see his interview with NPR here: ‘Everybody is cheating’: Why this teacher has adopted an open ChatGPT policy : NPR).

Q: Nanjun Li: Do you think the world will become more divided as generative AI deepens the division of labour?


[Beny Rubinstein]: The impact of generative AI on the division of labour, and the potential for division within societies, is a topic of debate among experts. While it is challenging to predict the exact outcome, here are two perspectives on the subject:

  • Potential for Division: Some experts argue that the advancements in generative AI could deepen the division of labour in society. As AI technologies become more capable of performing complex tasks, there is a possibility of job displacement in certain industries. This could lead to a division between those who have the skills and capabilities to adapt to the evolving job market and those who do not, potentially exacerbating existing inequalities.
  • Potential for Convergence: On the other hand, some experts believe that generative AI has the potential to converge rather than divide societies. They argue that while AI may automate certain tasks, it can also augment human capabilities and create new opportunities. AI technologies can assist humans in performing complex tasks, enabling them to focus on higher-value work that requires creativity, critical thinking, and interpersonal skills. This could lead to a more inclusive and collaborative labour market, where individuals can contribute their unique strengths and expertise.

It is important to note that the impact of generative AI on the division of labour will depend on various factors, including the pace of AI adoption, the availability of reskilling and upskilling opportunities, government policies, and societal responses. To mitigate potential divisions, efforts such as investing in education and skills development, promoting inclusive AI adoption, and implementing supportive policies can play a crucial role in ensuring that the benefits of AI are accessible to a broader segment of the population.

Ultimately, whether the world becomes more divided or more convergent because of generative AI will depend on the choices made by individuals, organizations, and societies as they navigate the opportunities and challenges presented by AI technologies. On that topic I recommend the 2017 TED Talk “How we can face the future without fear, together” by Rabbi Lord Jonathan Sacks (Z”L), which has more than 2 million views; this is a societal issue more than a technological one. (He is also the author of the bestseller “Morality: Restoring the Common Good in Divided Times”, which is extremely helpful for understanding the background and context of some of the dilemmas we are facing now.)

Q: Boris Bend: Thinking beyond the atypical: How do you see the world changing once true AGI is achieved, and when do you personally expect this may happen? (There are quite a few experts who expect that this could come much faster than most people believe, given the current exponential progress of AI research.)


[Beny Rubinstein]: Artificial intelligence can be broadly categorized into three main types: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial superintelligence (ASI). Among these, AGI puts artificial intelligence on a par with human capabilities. As a result, AGI systems can think, comprehend, learn, and apply their intelligence to solve problems much as humans would in a given situation. In simpler terms, if AGI is achieved, machines will be capable of understanding the world with the same capacity as any human being.

Regarding the timeline for achieving true AGI, opinions vary among experts. Some believe that AGI could be achieved sooner than anticipated due to the rapid progress in AI research, while others believe it may still be several decades away. Accurately predicting the timing of AGI is challenging due to the complexity of the problem and the many uncertainties involved. The next decade will play a crucial role in accelerating the development of AGI; some experts estimate there is a 25% chance of achieving human-like AI by 2030. However, while there have been significant advancements in narrow AI domains, achieving AGI requires overcoming several technical hurdles, such as building robust generalization capabilities, addressing ethical considerations, and ensuring safety measures are in place. Personally, I think it will take at least a decade to get there, if not more, given the many challenges on the way to artificial general intelligence, but it could happen sooner if humans organize themselves better and collaborate more efficiently!

The development of AGI has the potential to bring about profound changes in various aspects of society. Here are some potential areas of impact:

  • Automation and Labour: AGI could replace certain manual and cognitive tasks, leading to shifts in employment patterns and the need for upskilling and retraining.
  • Scientific Advancements: AGI could accelerate scientific research and discovery by analyzing vast amounts of data, identifying patterns, and generating hypotheses. It may facilitate breakthroughs in areas such as medicine, climate change, and fundamental sciences.
  • Socioeconomic Considerations: Achieving AGI raises important socioeconomic questions, including distribution of wealth, access to technology, and ethical considerations surrounding AI decision-making and control.
  • Human-Machine Collaboration: AGI could enable more effective collaboration between humans and machines, augmenting human capabilities in decision-making, creativity, and problem-solving.

Q: Marufa Bhuiyan: Based on the data and investment, which country is the AI capital of the world?


[Beny Rubinstein]: The landscape of AI dominance is evolving, with several countries making significant contributions. The United States has long been considered the AI capital of the world, with a robust ecosystem, substantial investments, and leading tech companies. However, recent developments highlight China’s emergence as a strong contender, with significant investments, a focus on AI research, and a national strategy to become a global leader. Israel, for its part, has gained prominence in AI startups and innovation, benefiting from a thriving tech ecosystem and strong research and development efforts. Sam Altman, CEO of Microsoft-backed OpenAI and creator of ChatGPT, took part in a talk at Tel Aviv University in Israel on June 5, 2023, and discussed ideas for establishing a national policy and strategy for the use and development of AI during his meeting with the country’s President, Isaac Herzog (bear in mind that over 400 multinational organizations have R&D centers in Israel, and a big chunk of AI innovation and development for “Big Tech” – Microsoft, Google, Amazon, Meta – is already done in Israel). Other countries like Canada and the United Kingdom are also making noteworthy contributions to the AI landscape. While the United States, China, and Israel currently hold key positions, the competition in AI remains dynamic, with various countries vying for leadership in this rapidly advancing field.

According to the 2019 AI Index Report, published by the Stanford Institute for Human-Centered Artificial Intelligence in California, it is estimated that global private investment in AI in 2019 was more than US$70 billion. The US, China and Europe took the largest share; Israel, Singapore and Iceland were found to invest heavily in per capita terms. Start-ups founded on AI technologies are a major part of the ecosystem, garnering more than $37 billion globally in investments in 2019, up from $1.3 billion raised in 2010, according to the report. (Source: The race to the top among the world’s leaders in artificial intelligence (nature.com))
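
As a quick sanity check on the figures above, growing from roughly $1.3 billion in 2010 to about $37 billion in 2019 implies a compound annual growth rate of around 45%, as the short calculation below shows.

```python
# Compound annual growth rate (CAGR) implied by the AI Index figures cited above.
start_value = 1.3e9   # global AI start-up investment in 2010 (USD)
end_value   = 37e9    # global AI start-up investment in 2019 (USD)
years       = 2019 - 2010

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR 2010-2019: {cagr:.1%}")  # roughly 45% per year
```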


About Beny Rubinstein:

Beny serves as the Head of Banco BV in Israel and is a Strategic Advisor for Evolution.inc, an AI-for-AI Generator of AI systems. He holds an MBA from The Wharton School and was a founding member of Microsoft Cloud & AI (Azure), and his wealth of experience in the field is unparalleled. He’s committed to amplifying the legacy and impact of wealth creators around the world and helping them live a meaningful life.

Charlette N’Guessan https://swisscognitive.ch/person/charlette-nguessan/ Thu, 23 Feb 2023 21:48:56 +0000 https://swisscognitive.ch/?post_type=cm-expert&p=121188 AI practitioner with experience in leading product development of AI model solutions in Africa. She is an award winner with a great passion for Human-Centered AI.

Charlette works as an AI consultant for the African Union High-Level Panel on Emerging Technologies (APET by AUDA-NEPAD) and as a BI & Data science product manager on data-driven Agritech projects in Ghana.
In 2018, she co-founded and led the product development of BACE API, secure remote identity-verification software that uses facial recognition to combat online identity fraud and strengthen the digital identity system in Africa.
She is also a co-author of Volume IV of “The rising to the Top” by IFEES and GEDC. She has earned accolades such as being named among the 80 African women advancing artificial intelligence in Africa and globally, becoming the first African woman to win the Africa Prize for Engineering Innovation, and receiving the Excellence Award for disruptive technology from CISO in 2022. Main industries of focus: Technology, Finance, Education & Agriculture. Main expertise: Artificial intelligence, AI Ethics, Facial recognition, AI product development
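
For readers curious how facial-recognition-based identity verification such as BACE API typically works under the hood, here is a generic, hypothetical sketch (not BACE’s actual implementation): two face embeddings produced by an encoder model are compared with cosine similarity against a calibrated threshold. Real systems add liveness detection, image-quality checks, and thresholds tuned on held-out data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(reference_embedding: np.ndarray,
           probe_embedding: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Return True if the probe face likely belongs to the reference identity.

    The threshold is illustrative only; production systems calibrate it
    against false-accept / false-reject targets.
    """
    return cosine_similarity(reference_embedding, probe_embedding) >= threshold

# Toy 128-dimensional embeddings standing in for the output of a face-encoder model.
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
probe = reference + rng.normal(scale=0.1, size=128)  # slightly perturbed "same" face

print(verify(reference, probe))  # True for this toy example
```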

The Generative AI Chatbot Hype Continues to Grow: ChatGPT and Bard – SwissCognitive AI Radar https://swisscognitive.ch/2023/02/08/the-generative-ai-chatbot-hype-continues-to-grow-chatgpt-and-bard-swisscognitive-ai-radar/ Wed, 08 Feb 2023 04:44:00 +0000 https://swisscognitive.ch/?p=121039 News of ChatGPT and Google's AI Chatbot "Bard" is spreading like wildfire. Conversational AI is likely to revolutionize our lives.

News of ChatGPT and Google’s newly unleashed AI chatbot “Bard” is spreading like wildfire. Without a doubt, conversational AI will revolutionize our lives in numerous ways.


“The Generative AI Chatbot Hype Continues to Grow: ChatGPT and Bard – SwissCognitive AI Radar Perspective”


ChatGPT, the generative AI chatbot, continues to make waves across the digital landscape. The artificial intelligence system is designed to facilitate human-computer conversations in natural language, using deep learning algorithms trained on large text data sets.

As its popularity increased, so too did speculation surrounding Google’s plans to enter this already hot market with its own conversational AI platform, “Bard”. The speculation became reality this Monday, when Google CEO Sundar Pichai introduced “Bard”, an experimental conversational AI service powered by the company’s Language Model for Dialogue Applications (LaMDA). Developed to provide fresh, high-quality responses based on information from the web, “Bard” will be made available to the public in the coming weeks, initially with a smaller model that requires less computing power. Google is also bringing these advancements to Search and expanding access through an API so that other developers can build innovative AI applications.
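
The article does not describe what Bard’s developer API looks like, so as a purely hypothetical illustration of how hosted generative-AI services are commonly consumed over HTTP, here is a sketch that posts a prompt to a made-up endpoint. The URL, field names, and authentication scheme are invented for illustration only.

```python
import os
import requests

# Entirely hypothetical endpoint and payload shape, shown only to illustrate
# the general pattern of calling a hosted generative-AI service over HTTP.
API_URL = "https://example.com/v1/generate"   # placeholder, not a real endpoint
API_KEY = os.environ.get("EXAMPLE_API_KEY", "demo-key")

payload = {
    "prompt": "Summarise the difference between ChatGPT and Bard in two sentences.",
    "max_tokens": 120,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
# Hypothetical response shape: {"text": "..."}
print(response.json().get("text", ""))
```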

Experts and stakeholders have raised several questions in light of these events, speculating about the potential outcome of the competition between these tech giants.

Google is holding a virtual event today to showcase how it is using AI to revolutionize the way people search, explore and interact with information, making it easier to find what they need. The 40-minute presentation will be streamed on YouTube at 2:30 PM CET / 8:30 AM EST.

This week’s SwissCognitive AI Radar gathers all these articles and sources into an overview of these exciting and likely landmark moments.
