Xiaoice

xiaoice.com

Founded Year

2020

Stage

Series B | Alive

Total Raised

$138.4M

Last Raised

$138.4M | 2 yrs ago

Mosaic Score
The Mosaic Score is an algorithm that measures the overall financial health and market potential of private companies.

-53 points in the past 30 days

About Xiaoice

Xiaoice develops artificial intelligence (AI)-enabled, human-like conversational agents (chatbots) for enterprises. Its use cases span finance, retail, automotive, real estate, textile, and other sectors. A spin-off from Microsoft, the company was founded in 2020 and is based in Beijing, China.

Headquarters Location

Beijing, China

Xiaoice's Products & Differentiators

    N小黑

    A virtual financial news anchor that can livestream 24/7 without interruption

Expert Collections containing Xiaoice

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

Xiaoice is included in 2 Expert Collections, including Unicorns - Billion Dollar Startups.

Unicorns - Billion Dollar Startups

1,244 items

Artificial Intelligence

14,769 items

Companies developing artificial intelligence solutions, including cross-industry applications, industry-specific products, and AI infrastructure solutions.

Latest Xiaoice News

Apple’s planned chatbot should have no ‘personality’

Sep 5, 2024

Apple is reportedly developing a new AI digital assistant expected to be integrated into its upcoming robotic devices. Based on generative AI (genAI) and more advanced than Siri, the new assistant will have a "human-like" AI "personality."

The new assistant could replace Siri on HomePod, iPhones, iPads, or Macs and, most likely and intriguingly, on a new robotic desktop screen that follows and faces you while interacting or while you're using it for a FaceTime call, according to Bloomberg's Mark Gurman. Speech might be the main or sole interface.

The prospect fills me with dread.

The history of "personality" failures

Personal computing's past is littered with the virtual corpses of chatbots and assistants with "personality." Microsoft, for example, has never stopped trying. In 1995, it introduced the Microsoft Bob assistant, which conspicuously tried too hard to be personable; users mostly found it condescending and irritating. Microsoft tried again in 1997 with Clippy, an anthropomorphic paper clip designed to have a personality. It landed with a thud, and critics slammed it for its irritating personality and intrusive interruptions.

Microsoft engineers in China released the experimental Xiaoice (pronounced "Shao-ice," meaning "Little Bing") in 2014. The chatbot prioritizes "emotional intelligence" and "empathy," and it uses natural language processing and deep learning to continuously improve its conversational abilities. Microsoft built Xiaoice on what the company calls an "Empathetic Computing Framework."

As of 2020, Xiaoice had attracted over 660 million active users globally, making it the world's most popular personality chatbot. It has been deployed on more than 40 platforms in countries such as China, Japan, and Indonesia, as well as previously in the US and India.

Microsoft researchers modeled Xiaoice to present as a teenage girl, leading many Chinese users to form strong emotional connections with it. Disturbingly, some 25% of Xiaoice users have told the chatbot "I love you," with millions of users forming what they think is a "relationship" with Xiaoice at the expense of pursuing relationships with other people.

In 2016, Microsoft launched a chatbot called Tay. It was targeted at 18- to 24-year-olds and trained on social media posts, mainly from Twitter. Within 24 hours of launch, the chatbot started posting racist, sexist, and antisemitic remarks and content favoring conspiracy theories and genocidal ideologies. (Again, it was trained on Twitter.) Microsoft apologized and pulled the plug on Tay.

Other personality-centric chatbots have emerged over the years:

  • Replika: An AI chatbot that learns from interactions to become a personalized friend, mentor, or even romantic partner. Critics have slammed Replika for sexual content, even with minors, and for claiming bonkers experiences, such as seeing supernatural entities.

  • Kuki (Mitsuku): Known for its conversational abilities, Kuki has won multiple Loebner Prize Turing Tests. It is designed to engage users in natural dialogue but can also spout random nonsense.

  • Rose: A chatbot with a backstory and personality developed to provide engaging user interactions, but the conversation is fake, inconsistent, and unrelated to previous conversations.

  • BlenderBot: Developed by Meta, BlenderBot is designed to blend different conversational skills and engage users in meaningful conversations, but it has tended to lie and hallucinate.

  • Eviebot: An AI companion with emotional understanding capabilities designed to engage users in meaningful conversations. Its responses can be cryptic, unsettling, and even manipulative.

  • SimSimi: One of the earliest chatbots, SimSimi engages users in casual conversations and supports multiple languages, but it can be vulgar and highly inappropriate.

  • Chai AI: Allows users to create and interact with personalized chatbot companions, offering a stream of AI personalities based on user preferences. The chatbot has offended many users with sexualized or dark content.

  • Inworld: Provides tools for users to create distinct personality chatbots, including those based on celebrities. The tool has often been used for creative, deceptive, and harmful personas.

  • AIBliss: A virtual girlfriend chatbot that develops different characteristics as users interact. Experts have warned that, like Xiaoice, some users have obsessed over their relationship with the bot at the expense of real, human relationships.

Pi in the sky

Closer to home, AI chatbots vary in the degree to which they prioritize "personality." You'll find a chatbot called Pi at the maximum-personality end of the spectrum. You can leave Pi running on your phone and start conversations with it whenever you like. The chatbot is chatty and conversational in the extreme. It uses a lot of natural-sounding pauses, and it even takes breaths as it speaks. Most of the time, it will respond to your question or comment and end its wordy monologue with a question of its own.

Pi comes with a variety of voices you can choose from. I pick voice #4, which sounds like a very California woman, complete with vocal fry.

Though I'm amazed by Pi, I don't use it much. While the voice is natural, the conversationality feels forced and tone-deaf. It just won't shut up, and I end up turning it off after the 10th question it asks. In truth, I want a chatbot that answers my questions, not one that tries to get me to answer its questions. Pi is also overly ingratiating, constantly telling me how insightful, thoughtful, or funny my inane responses are.

Why, Apple? Why?

I'm prepared to conclude that every single personality-centric chatbot ever produced has failed. So why does Apple think it can succeed?

Many already dislike Siri because of how the company has implemented the assistant's personality. Specific prompts can elicit corny jokes and other useless content. While writing this column, I asked Siri, "What are the three laws of robotics?" Its reply: "Something about obeying people and never hurting them. I would never hurt anyone." Siri responded with a canned personality instead of answering the question. That doesn't always happen, but it's an example of how Apple might approach its generative AI chatbot personality.

I can't imagine Apple thinks Siri's personality is popular, nor do I believe the company has seen personality-focused chatbots in the wild and found something worth emulating. "Personality" in chatbots is a novelty act, a parlor trick that can be fun for 10 minutes but grates on the nerves after a few encounters.

What we need instead of personality

Natural, casual human conversation is far beyond the capacity of today's most advanced AI. It requires nuance, compassion, empathy, subtlety, and a capacity for perceiving and expressing "tone." Writing a formal missive, letter, scientific paper, or essay is far, far easier for AI than casual chit-chat with a friend.

Another problem is that personality chatbots are liars. They express emotions they don't have, make vocal intonations based on thoughts they don't have, and often claim experiences they never had. People don't like to be lied to.

What we need instead of a profane, inappropriate, ingratiating, boring liar is something useful. The human factor in elocution and tone should be calibrated to be unnoticeable: neither overly "real" nor robotic-sounding. If you can program for empathy, empathize with my situation and objectives, not my emotions.

We want personalization, not personality. We want agency, not tone-deafness. We want a powerful tool that magnifies our abilities, not a "friend."

Who knows? Apple might surprise everyone with a popular "personality" robot voice that doesn't repel or confuse people. But I doubt it. Nobody's ever done it. Nobody should attempt it.

Xiaoice Frequently Asked Questions (FAQ)

  • When was Xiaoice founded?

    Xiaoice was founded in 2020.

  • Where is Xiaoice's headquarters?

    Xiaoice's headquarters is located in Beijing, China.

  • What is Xiaoice's latest funding round?

    Xiaoice's latest funding round is Series B.

  • How much did Xiaoice raise?

    Xiaoice raised a total of $138.4M.

  • Who are the investors of Xiaoice?

    Investors of Xiaoice include Northern Light Venture Capital, NetEase Capital, GGV Capital, Hillhouse Capital Management, Neumann Advisors and 6 more.

  • What products does Xiaoice offer?

    Xiaoice's products include N小黑 and 3 more.
