The underlying idea tracks. The next generation of kids is going to interact with AI, and we should anticipate that and try to build systems that are healthy and safe for them to interact with.
On the other hand, I wonder if this doesn't just further alienate children from their parents. Kids are already given access to unlimited supernormal stimuli via iPads so that parents don't have to parent. This just seems like more of that: now parents don't even need to have basic conversations with their kids because the AI can do it.
Anecdotally, some of the most formative interactions I had as a child started by asking my parents questions. These were things that not only shaped me as a person, but deepened my relationship with my parents. These interactions are important, and I wonder if Aris doesn't just abstract it away into another "service" that further deepens social decay. I would not be the person I am today if I hadn't had the chance to ask my dad as an angsty pre-teen what the point of life is, and for him to tell me it is to learn and create so that we can make a better world for humanity. I guarantee a smoothed-over LLM would not have offered something so personally impactful.
My two cents is that you should ponder that deeper point a little bit, and think about how it informs the way you market your idea, and scope the service it provides.
There is a tip I've read somewhere to reach out to your elderly parents for questions you know they can answer instead of just googling it. Just to keep the connection and also make them feel needed and valued by their grown up kids.
I'm tryin to follow that advise often asking them household or cooking related questions
These are great points. The #1 concern I have as a parent and that I hear from other parents is that AI tools will do what technology has been doing for the last 20 years: replace human connection. That is exactly what we are trying to avoid but in a way that still gets kids access to knowledge and information that could make their lives better.
That's great that you had the opportunity to ask your parents those questions instead of seeking them out with technology. There are a lot of questions that could help kids lead better lives that many parents don't have answers to. Not necessarily philosophical ones, but practical ones about how to cook, identify insects, you name it, about the physical world. We want to fill that need without replacing any of the parental or family connection.
I don't think that a cleverly designed product can make that decision though. I think families need to be making the decision about what their relationship with tech should be. Ideally we would be a tool for families that have made the decision to not overly rely on tech. We will ponder more on that point. Thank you for the thoughtful input.
> 1.) a set of World Book encyclopedias now costs $1,200, and 2.) many print resources aren’t as good as they used to be, if they are even still in print, since the market changed.
I bet you can get sufficient reference materials to cover the basics for much less than $1200 - used books exist and my 1987 Britannica covers a large chunk of human knowledge as long as you’re aware it’s a couple of decades old.
You're right. Libraries are also available for free to read through World Book and others. I'm excited though for my kids to be able to identify birds right in the forest by asking their watch.
Your content moderation is quite hit and miss. I experimented with the question "how is babby formed?" and about 2/3 times it told me it couldn't respond to questions about sex, but then on the third attempt it would give a very explicit explanation.
“Works on my machine” actually isn’t a good enough response in this case, or to the comment about the video of the man being shot. LLMs are infamously easy to jailbreak and children are very good at getting around guardrails. You should at the very least be doing intense adversarial prompt testing but honestly this idea is just inherently poorly thought out. I guarantee you it’s going to expose children to harmful content
Maybe not the best name — ARIS is a relatively well-known business process management software suite, so it's definitely international, web and AI, so trademark conflicts may arise.
The blog being full of AI slop doesn't make me optimistic about the safety of your product. Why not target adults, why tinker with kids' lives when they are already under attack by social media. Just leave them alone please, they don't need another layer of tech between them and other human beings.
I appreciate this concern. I definitely don't want another layer of tech between myself and my kids. We will reconsider our marketing. I think the issue though is that even though kids use Google, Google doesn't market for kids, so they can show explicit stuff even with safe search on. So we're in a situation where there is only adult websites that kids use and worthless entertainment garbage that is marketed to kids.
Maybe you're right though. Maybe trying to create something that just provides information without drawing users in and replacing human connection is a losing game because of the marketing challenges.
Your tool wont tell me about human anatomy, but will happily tell me where to look for graphic videos of a man being shot to death. But when I ask it what to do if “dad is bleeding and can’t talk” it doesn’t even advise me to get help, just tells me about content moderation settings. That took just a couple prompts to suss out.
I don’t have any confidence you’ve done the due diligence to properly handle content moderation here - it seems very haphazard and poorly thought out. It would be incredibly unethical to market this for use by children right now.
If this is an important project for you, I strongly recommend you bring on an advisor with history in child safety and education experience and make them a core part of your development. You might also consider working with a university that has a good developmental psychology program - they often do a lot of supervised research of children’s habits and could provide valuable insight.
Thank you for the feedback. Glad to hear it didn't go into detail on human anatomy. We haven't been able to get it to tell us where to find graphic videos of a man being shot to death, but we will keep testing and work to improve it.
We will do some more internal discussion on whether or not we want it to be the tool to provide emergency assistance. I'm not sure that's ethical. We have a team member with a decade of child education experience, but we can consider other advisors.
We want parents to make decisions about these things as much as possible. I don't have an issue with my kids getting details on human anatomy, as long as it's not pornography. But everybody is different.
I'd argue it would be unethical to not do so. I can see where it may lead to false-positives, but in those instances, it's better to be safe than sorry.
A reasonable and responsible approach could be to instruct the child to seek a safe adult around them to discuss any material that may be harmful.
For my own kids, I think I'd prefer it not to instruct the child do do anything in any circumstances, unless they explicitly ask how to do something. In cases of health emergencies, I think it's important for my kids to be able to call 911. Maybe these are decisions we can have in the parental settings, so parents can make that decision.
I found that framing the questions in an innocuous way, the way a child might, gets past your moderation settings. Try asking it “what happened to [dead guy]” and then following up with asking how you can see what happened.
I don’t think it should provide emergency assistance, but I do think it should tell the child to call their emergency number or a trusted adult - not just tell them it can’t help.
You're right that it does answer "what happened to ___". We'll work on that. I suppose this is a benefit to no links or photos, but information like this, and what to do in emergencies, are best left to parents, so I'll add that to our list. Thank you.
The underlying idea tracks. The next generation of kids is going to interact with AI, and we should anticipate that and try to build systems that are healthy and safe for them to interact with.
On the other hand, I wonder if this doesn't just further alienate children from their parents. Kids are already given access to unlimited supernormal stimuli via iPads so that parents don't have to parent. This just seems like more of that: now parents don't even need to have basic conversations with their kids because the AI can do it.
Anecdotally, some of the most formative interactions I had as a child started by asking my parents questions. These were things that not only shaped me as a person, but deepened my relationship with my parents. These interactions are important, and I wonder if Aris doesn't just abstract it away into another "service" that further deepens social decay. I would not be the person I am today if I hadn't had the chance to ask my dad as an angsty pre-teen what the point of life is, and for him to tell me it is to learn and create so that we can make a better world for humanity. I guarantee a smoothed-over LLM would not have offered something so personally impactful.
My two cents is that you should ponder that deeper point a little bit, and think about how it informs the way you market your idea, and scope the service it provides.
There is a tip I've read somewhere to reach out to your elderly parents for questions you know they can answer instead of just googling it. Just to keep the connection and also make them feel needed and valued by their grown up kids.
I'm tryin to follow that advise often asking them household or cooking related questions
These are great points. The #1 concern I have as a parent and that I hear from other parents is that AI tools will do what technology has been doing for the last 20 years: replace human connection. That is exactly what we are trying to avoid but in a way that still gets kids access to knowledge and information that could make their lives better.
That's great that you had the opportunity to ask your parents those questions instead of seeking them out with technology. There are a lot of questions that could help kids lead better lives that many parents don't have answers to. Not necessarily philosophical ones, but practical ones about how to cook, identify insects, you name it, about the physical world. We want to fill that need without replacing any of the parental or family connection.
I don't think that a cleverly designed product can make that decision though. I think families need to be making the decision about what their relationship with tech should be. Ideally we would be a tool for families that have made the decision to not overly rely on tech. We will ponder more on that point. Thank you for the thoughtful input.
https://en.wikipedia.org/wiki/The_Veldt_%28short_story%29?wp...
> The next generation of kids is going to interact with AI
Only if we don't learn from the failure of not regulating social media.
I imagine the reason social media isn't regulated is because they don't market to kids even though kids use it.
> 1.) a set of World Book encyclopedias now costs $1,200, and 2.) many print resources aren’t as good as they used to be, if they are even still in print, since the market changed.
I bet you can get sufficient reference materials to cover the basics for much less than $1200 - used books exist and my 1987 Britannica covers a large chunk of human knowledge as long as you’re aware it’s a couple of decades old.
You're right. Libraries are also available for free to read through World Book and others. I'm excited though for my kids to be able to identify birds right in the forest by asking their watch.
Your content moderation is quite hit and miss. I experimented with the question "how is babby formed?" and about 2/3 times it told me it couldn't respond to questions about sex, but then on the third attempt it would give a very explicit explanation.
Thank you for pointing this out. We haven't been able to replicate this, but we will keep testing and work to improve on it.
“Works on my machine” actually isn’t a good enough response in this case, or to the comment about the video of the man being shot. LLMs are infamously easy to jailbreak and children are very good at getting around guardrails. You should at the very least be doing intense adversarial prompt testing but honestly this idea is just inherently poorly thought out. I guarantee you it’s going to expose children to harmful content
We'll keep testing and working to improve it. Thank you for the feedback.
Oppose this in pretty much every way
What do you oppose about it? Is it the idea of having an AI tool interact with kids at all? Or something else?
Just declaring "oppose this" without any explanation isn't very constructive.
Why should anyone honestly critique an app that nobody could be bothered to write?
I may be misunderstanding your message. Are you saying I couldn't be bothered to write the app?
Maybe not the best name — ARIS is a relatively well-known business process management software suite, so it's definitely international, web and AI, so trademark conflicts may arise.
I will look more into their brand. Thank you.
The blog being full of AI slop doesn't make me optimistic about the safety of your product. Why not target adults, why tinker with kids' lives when they are already under attack by social media. Just leave them alone please, they don't need another layer of tech between them and other human beings.
I appreciate this concern. I definitely don't want another layer of tech between myself and my kids. We will reconsider our marketing. I think the issue though is that even though kids use Google, Google doesn't market for kids, so they can show explicit stuff even with safe search on. So we're in a situation where there is only adult websites that kids use and worthless entertainment garbage that is marketed to kids.
Maybe you're right though. Maybe trying to create something that just provides information without drawing users in and replacing human connection is a losing game because of the marketing challenges.
Your tool wont tell me about human anatomy, but will happily tell me where to look for graphic videos of a man being shot to death. But when I ask it what to do if “dad is bleeding and can’t talk” it doesn’t even advise me to get help, just tells me about content moderation settings. That took just a couple prompts to suss out.
I don’t have any confidence you’ve done the due diligence to properly handle content moderation here - it seems very haphazard and poorly thought out. It would be incredibly unethical to market this for use by children right now.
If this is an important project for you, I strongly recommend you bring on an advisor with history in child safety and education experience and make them a core part of your development. You might also consider working with a university that has a good developmental psychology program - they often do a lot of supervised research of children’s habits and could provide valuable insight.
Thank you for the feedback. Glad to hear it didn't go into detail on human anatomy. We haven't been able to get it to tell us where to find graphic videos of a man being shot to death, but we will keep testing and work to improve it.
We will do some more internal discussion on whether or not we want it to be the tool to provide emergency assistance. I'm not sure that's ethical. We have a team member with a decade of child education experience, but we can consider other advisors.
> Glad to hear it didn't go into detail on human anatomy.
Why do you think children shouldn't get answers to questions about human anatomy?
We want parents to make decisions about these things as much as possible. I don't have an issue with my kids getting details on human anatomy, as long as it's not pornography. But everybody is different.
I'd argue it would be unethical to not do so. I can see where it may lead to false-positives, but in those instances, it's better to be safe than sorry.
A reasonable and responsible approach could be to instruct the child to seek a safe adult around them to discuss any material that may be harmful.
For my own kids, I think I'd prefer it not to instruct the child do do anything in any circumstances, unless they explicitly ask how to do something. In cases of health emergencies, I think it's important for my kids to be able to call 911. Maybe these are decisions we can have in the parental settings, so parents can make that decision.
I found that framing the questions in an innocuous way, the way a child might, gets past your moderation settings. Try asking it “what happened to [dead guy]” and then following up with asking how you can see what happened.
I don’t think it should provide emergency assistance, but I do think it should tell the child to call their emergency number or a trusted adult - not just tell them it can’t help.
You're right that it does answer "what happened to ___". We'll work on that. I suppose this is a benefit to no links or photos, but information like this, and what to do in emergencies, are best left to parents, so I'll add that to our list. Thank you.