12 comments

  • throwaway314155 10 hours ago
    > In the US, about 1/5 children are hospitalized each year that don’t have a caregiver. Caregiver such as play therapists and parents’ stress can also affect children's emotions.

    Trust me, large language models are not anywhere close to being able to substitute as an effective parent, therapist, or caregiver. In fact, I'd wager any attempts to do so would have mostly _negative_ effects.

    I would implore you to reconsider this as a legitimate use case for your open device.

    > We believe this is a complement tool and it is not intended to replace anyone.

    Well which is it? Both issues you list heavily imply that your tool will serve as a de facto replacement. But then you finish by saying you don't intend to do that. So what aspects of the problems you listed will be solved as a simple "complement tool"?

    • szundi 10 hours ago
      I had quite a good social sciences teacher.

      I'll never forget one of his remarks: there is only one thing worse than someone not having a mother - that he has one.

      So maybe a chatty LLM is not the worst thing that can happen to someone.

    • zq2240 10 hours ago
      In pediatric care, not every child has a parent who takes good care of them. In hospitals, it is more often play therapists who do this work, and their own stress can also affect children's emotions. For example, some children feel very traumatized before a line placement or blood test. This tool can help explain the specific process to them in empathetic language and encourage them on specific topics.

      I mean doctors and play therapists still have to do their jobs. We have interviewed doctors who feel particularly frustrated about how to comfort children before tests or surgeries. They hope for a tool that can help build comfort for kids, which means tests can be run sooner.

    • fragmede 10 hours ago
      > Trust me, large language models are not anywhere close to being able to substitute as an effective parent, therapist, or caregiver.

      You're asking us to trust you, but why should we trust you on this? Regardless of whether I think ChatGPT is any good at those things, you'd need some supporting evidence one way or the other before continuing.

      • throwaway314155 9 hours ago
        It's an expression. In this context I just meant "it should be obvious". Maybe try steel-manning my argument first. If you really can't see why that's likely the case after using an LLM yourself, then I'll be happy to admit that I'm making an emotional argument and you're in no way required to "trust me".
        • eddd-ddde 9 hours ago
          Honestly I don't see it as an "obvious" thing.

          I won't be surprised if in a couple more years this kind of thing is the norm. I don't think there's anything inherently different from a person who listens to you.

        • fragmede 9 hours ago
          https://chatgpt.com/share/6701aab3-2138-8009-b6b8-ec345b4382...

          Why is that "not anywhere close to being able to substitute as an effective parent, therapist, or caregiver."?

          Maybe I've had bad parents/therapists/caregivers all my life, but it seems like an entirely reasonable response. If there's a more specific scenario you'd like me to pose to show that its advice is no good, I'm happy to ask it.

    • moralestapia 9 hours ago
      >I would implore you to reconsider this as a legitimate use case for your open device.

      OP, I would implore you to not listen to any of this "advice" at all and just keep on building really nice things.

      I can already think of a dozen valuable applications of it in a therapeutic context.

      Ignore those who don't "do".

      • brailsafe 9 hours ago
        > Ignore those who don't "do".

        I'm actually pretty ok with ignoring those who don't "think" before they "do", not that the OP is one of those people, but "doing" as a mark of virtue seems fairly likely to be destructive.

  • gcanyon 8 hours ago
    I was a solo latchkey kid from age... 5 or 6 maybe? I developed a love of reading and spent basically all my waking hours that weren't forcibly in the company of others doing that, by myself: summertime in San Diego, teenage me read 2-4 books a day. I grew up to be incredibly introverted (ironic that I work as a product manager, which strongly favors extroverts) and I wonder how differently I might have turned out if a digital companion had urged me to be more social (something my parents never did), or just interacted with me on a regular basis.
  • echoangle 11 hours ago
    I don't want to criticize a cool project but why do people feel the need to create new hardware for AI things? It was the same thing with the rabbit r1. Why do I need a device that contains a screen, a microphone and a camera? I have that, it's called a smartphone. Being a bit smaller doesn't really help because I have my phone with me almost all the time anyways. So it's actually more annoying to carry the phone and the new device instead of just having the phone. I would be happy with it just being an app.
    • suriya-ganesh 11 hours ago
      I can answer to this, having worked on an assistant that is always on, from your phone.

      The platforms (iOS, Android, etc.) are very limiting. It is hard to have something always on and listening; Apple in particular is aggressive about apps running in the background.

      You need constant permissioning and special privileges. The exposed APIs themselves are not enough to build deep and stable integrations to the level of Siri/Google Assistant.

      • echoangle 11 hours ago
        Oh, I didn't get that it's supposed to always be listening. Maybe I'm not the target audience but I wouldn't want that anyways. If that's important, that might be a good reason. I think this needs to change in the future though if AI agents are supposed to become popular, I can't imagine buying separate hardware every time. Either the integration in the OS needs to become better or Google/Apple will monopolize the market and be the only options.
        • jsheard 11 hours ago
          > Oh, I didn't get that it's supposed to always be listening. Maybe I'm not the target audience but I wouldn't want that anyways.

          I don't know about this project, but generally when a voice assistant is "always listening" they mean it's sitting in a low power state listening for a very specific trigger like "Hey Siri" or "OK Google" and literally nothing else. As much as they would probably like to have it really listening all the time, the technology to have a portable device run actual speech recognition and parsing at all times with useful battery life doesn't really exist yet.
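As a rough illustration of that two-stage design, here's a toy gate in Python. The string match stands in for a real low-power acoustic wake-word detector (which matches audio features on-device, not text), and nothing reaches the full recognizer until the trigger fires:

```python
IDLE, STREAMING = "idle", "streaming"

class WakeWordGate:
    """Toy two-stage pipeline: a cheap per-frame trigger check runs
    constantly; the expensive full speech recognizer only sees frames
    captured after the trigger fires."""

    def __init__(self, trigger="hey siri"):
        self.trigger = trigger
        self.state = IDLE
        self.captured = []  # frames forwarded to the full recognizer

    def feed(self, frame):
        # `frame` is a string here purely for illustration.
        if self.state == IDLE:
            if self.trigger in frame.lower():
                self.state = STREAMING
        else:
            self.captured.append(frame)
```

Everything heard before the trigger is simply discarded, which is how these devices stay inside a battery budget.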

          • joeyxiong 11 hours ago
            You are right: "always listening" means it sits in a low-power state listening for a very specific trigger like "Hey Siri" or "OK Google" and literally nothing else.
          • echoangle 10 hours ago
            Yes, I thought it was button-triggered.
          • fragmede 10 hours ago
            Yes it does. An Nvidia Jetson Nano with a microphone running Whisper and a banana-sized battery will give you 8 hours of transcription.
      • xnx 11 hours ago
        If you have a separate -dedicated- Android smartphone for this task, why wouldn't the app run in the foreground?
    • explorigin 11 hours ago
      I don't think you're their target audience. I'd love something like this for my kid (who isn't ready for a smartphone).

      Other problems are persistence. Have you looked at how hard it is to keep an app running in the background on an iPhone? on a Samsung phone? For an app that needs to be always-on, it's a non-starter unless you're Apple or Google respectively.

    • dmitrygr 9 hours ago
      Apple would stop you from scooping up all that delicious delicious data. Google probably would too. Always-on listening requires building e-waste.
    • moralestapia 9 hours ago
      >why do people feel the need to create new hardware for AI things?

      Because people have agency and hobbies, and they're free to decide what to spend their money and time on.

  • aithrowawaycomm 9 hours ago
    This seems to be yet another reckless and dishonest scam from yet another cohort of AI con artists. From starmoon.app:

    > With a platform that supports real-time conversations safe for all ages...Our AI platform can analyse human-speech and emotion, and respond with empathy, offering supportive conversations and personalized learning assistance.

    These claims are certainly false. It is not acceptable for AI hucksters to lie about their product in order to make a quick buck, regardless of how many nice words they say about emotional growth.

    Do you have a single psychologist on your staff that signed off on any of this? Telling lies about commercial products will get you in trouble with regulators, and it truly seems like you deserve to get in trouble.

    • arendtio 8 hours ago
      Can you please elaborate on why this is 'certainly false'? What is missing?

      To me, it looks like you have some experience with the topic and believe that it is very hard to build something like the device in question, but which properties of the solution make you so certain?

  • allears 11 hours ago
    This tool requires a paid subscription, but it doesn't say how much. The hardware is affordable, but the monthly fees may not be. Also, the hardware is only useful as long as the company's servers are up and running -- better hope they don't go out of business, get sold, etc.
    • joeyxiong 11 hours ago
      Sorry for the confusion. We are still settling on subscription pricing, but I can promise that the premium subscription will not cost more than $9 per month.
  • jstanley 11 hours ago
    Personally I have found talking to AI to be much more draining than typing. It's a bit like having a phone call vs IM. I'd basically always prefer IM as long as I'm getting quick responses.
    • josephg 9 hours ago
      Since the new OpenAI voice model launched, I feel the opposite. Some of the responses me and my gf have gotten from it were fantastic. It’s really good at role play and using intonation now. And you can interrupt it halfway through a response if it’s going off track.

      For example, I spent 20 minutes the other day talking through some software architecture decisions for a solo project. That was incredible. No way I would have typed out my thoughts as smoothly.

    • willsmith72 11 hours ago
      I still use text most of the time (technical or complex problems, copy pasting materials...), but for things like language learning or getting answers while commuting/walking, voice is a no-brainer.
    • afro88 11 hours ago
      For the use case that this project is for?
    • zq2240 11 hours ago
      Yeah, I take your point. Compared with human communication, I think talking with an AI can be self-paced.
  • butterfly42069 11 hours ago
    I think this is great, ignore the people comparing your project to the commercial Rabbit R1 project, those people are comparing apples and oranges.

    A lot of the subscription-based pull-ins could be replaced by networking into a machine running whisper/ollama etc. anyway.

    Keep up the great work I say :)
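A sketch of that local whisper/ollama swap, assuming the `openai-whisper` and `ollama` Python packages and a locally pulled model (the model names here are illustrative, not from the project); the heavyweight imports are kept inside the function so the message-building helper works on its own:

```python
def make_messages(transcript, system_prompt="You are a friendly companion."):
    """Build an OpenAI/ollama-style chat message list from a transcript."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": transcript},
    ]

def reply_to_clip(wav_path, model="llama3"):
    """Transcribe a clip locally with whisper, then answer with ollama.
    Requires `pip install openai-whisper ollama` and a running ollama
    daemon; not exercised here."""
    import whisper
    import ollama
    text = whisper.load_model("base").transcribe(wav_path)["text"]
    resp = ollama.chat(model=model, messages=make_messages(text))
    return resp["message"]["content"]
```

Nothing in that loop needs a subscription; the trade-off is keeping a box on your network beefy enough to run both models.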

  • deanputney 9 hours ago
    Is this specific hardware necessary? If I wanted to run this on a Raspberry Pi zero, for example, is that possible?
    • zq2240 9 hours ago
      Sorry, it currently supports the ESP32 DevKit and the Seeed Studio XIAO ESP32S3. For the Raspberry Pi Zero, you would need to switch to a different PlatformIO environment and remap the corresponding GPIO pins.
  • napoleongl 9 hours ago
    I can see something like this being used in various inspection scenarios. Instead of an inspector having to fill out a template or fiddle with an iPad-thingie in tight spots, they can talk to this, and an LLM converts it to structured data according to a template.
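A hedged sketch of the template side of that idea, with a made-up field list; the LLM call itself is left out, but the prompt builder and the JSON validation you'd want around the model's answer are shown:

```python
import json

# Hypothetical inspection template: field name -> expected type.
TEMPLATE = {"site": str, "component": str, "condition": str, "severity": int}

def build_prompt(transcript):
    """Ask the model to fill the template from a spoken inspection note."""
    fields = ", ".join(TEMPLATE)
    return (f"Extract the fields {fields} from this inspection note "
            f"and reply with JSON only:\n{transcript}")

def parse_report(llm_output):
    """Validate the model's JSON against the template before storing it."""
    data = json.loads(llm_output)
    for field, kind in TEMPLATE.items():
        if not isinstance(data.get(field), kind):
            raise ValueError(f"missing or mistyped field: {field}")
    return data
```

The validation step matters: models drift from a requested schema often enough that structured output should never go straight into a database unchecked.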
  • vunderba 10 hours ago
    I predicted a Teddy Ruxpin / AG Talking Bear driven by LLMs a while ago. My biggest fear is that the Christmas toy of the year would be a mass produced always listening device that's effectively constantly surveilling and learning about your child, courtesy of Hasbro.
  • stavros 11 hours ago
    I'd love a hardware device that streamed the audio to an HTTP endpoint of my choosing, and played back whatever audio I sent. I can handle the rest myself, but the hardware side is tricky.
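The server half of that is the easy part; here's a stdlib-only sketch of an endpoint that accepts POSTed audio bytes and returns audio bytes. It just echoes, where a real one would run STT, an LLM, and TTS in between:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AudioEchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        audio_in = self.rfile.read(length)
        audio_out = audio_in  # placeholder for STT -> LLM -> TTS
        self.send_response(200)
        self.send_header("Content-Type", "audio/wav")
        self.send_header("Content-Length", str(len(audio_out)))
        self.end_headers()
        self.wfile.write(audio_out)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=0):
    """Start the server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), AudioEchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve()
    port = server.server_address[1]
    req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                                 data=b"fake-wav-bytes", method="POST")
    print(urllib.request.urlopen(req).read())
    server.shutdown()
```

The tricky part, as you say, is the hardware side: getting the mic capture, buffering, and playback right on a battery-powered device.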
  • danielbln 11 hours ago
    Any plans for being able to run the entire thing locally with local models?
    • _joel 11 hours ago
      You can make an OpenAI-compatible local server using LM Studio[1] and load any model you want. It'd have to be on another host though; the S3 has some inference capability with add-ons AFAIR, but nowhere near enough grunt to run locally at any usable token/s.

      [1] https://lmstudio.ai/
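For anyone trying this, the request side is just the standard OpenAI chat-completion shape pointed at localhost. A sketch, assuming LM Studio's default server port of 1234; treat the base URL and model name as placeholders for whatever you load:

```python
import json
import urllib.request

def build_chat_request(model, user_text):
    """Standard OpenAI-style chat payload; any compatible server accepts it."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }).encode()

def ask_local_model(user_text, model="local-model",
                    base_url="http://localhost:1234/v1"):
    """POST to a locally hosted OpenAI-compatible endpoint.
    Requires a server actually running; not called here."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=build_chat_request(model, user_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the wire format is the same, swapping between a hosted API and the local box is a one-line base-URL change.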