Peeking at the source, it's just a zero-width div, which is not accommodating of people with disabilities. This might open you up to litigation if you disqualify a blind person on the grounds that he gave the wrong answer 'using AI', when he might just have been answering the question his screen reader read out.
This is an excellent point. I did not think of that.
There's an easier fix. Have the candidates state if they have a vision disability first and then send them down a different pathway for validation. There aren't that many, so it's not going to be costly or anything.
Instead of the zero-width div, you could set up an event listener for the copy event (using the addEventListener() method) that calls .clipboardData.setData() on the ClipboardEvent to replace the copied text with your modified code.
That should avoid messing things up for people with screen readers while still trapping the copy+pasters.
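Something like this, as a rough, untested sketch (the selector and the replacement snippet are placeholders, not anything from the actual site):

    // Sketch only: '#code-sample' and the replacement text are made up for illustration.
    const modifiedCode = 'if x >= 3:\n    ...';   // the ">=" version you want pasted
    document.querySelector('#code-sample').addEventListener('copy', (event) => {
      event.clipboardData.setData('text/plain', modifiedCode);
      event.preventDefault();   // otherwise the browser copies the visible selection as-is
    });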
Or add aria-hidden=true
I've had clipboard events and the clipboard API disabled in my browser to prevent websites from intercepting them for ages. I can't be the only one.
My take on that is that the very slim minority who do this are also likely to make it through this very blunt hiring tool anyway.
How about don't let them copy it at all. Show them the problem on a shared screen and they should speak the answer in 15 or 30 seconds.
If someone is visually impaired, it's short enough you can just read the problem text to them.
> Show them the problem on a shared screen
I'm pretty sure the intent is to weed people out well before they get to a point where you could share a screen with them. He mentioned a few people "resubmitted the application", so sure this is probably an initial step.
Can't read the problem when the point is to catch those who copy-paste the code.
For better or worse, screen readers tend to be less easily tricked by things like that now.
You got a different source than I did. For me it's a span of finite width, but with a font size of 1px.
What most interviews get wrong is that there are usually just a few "bullet points" that, if you see them, instantly tell you the candidate at least has the technical chops.
Instead of creating a test that specifically aims for those bullet points, many technical assessments end up with convoluted scaffolding when actually, only those key bullet points really matter.
Like the OP, I can usually tell if a candidate has the technical chops in just a handful of really straightforward questions for a number of technical domains.
If you have a test that can identify a good candidate quickly then you have honestly struck gold and can genuinely use that to start your own company. I mean this with absolute sincerity.
One of the absolute hardest parts of my business is hiring qualified candidates, and it's demoralizing, time consuming, and unbelievably expensive. The best that I've managed to do is the same as what pretty much every other business owner says... which is that I can usually (not always) filter out the bad candidates (along with some false negatives), and have some degree of luck in hiring good candidates (with some false positives).
Good candidates are not universally good; they are good for you, after hire.
One of the best business analysts I worked with (also a profession, mind you) was almost fired when working under an old, grumpy and clearly underskilled one.
I was hired once without interview into a unicorn, was loved by colleagues but hated the work, the business and the industry, then left rather quickly.
See? There are mismatches and unknown unknowns, not just bad or good developers.
Yup, the worst performance I ever put in on a job was due to the complete unavailability of a manager when I was a team of one. And then that manager would not even fire me; I had to quit to get out of there.
Yeah, sourcing developers / collaborating with developers is a huge barrier to entry. More than other factors such as deciding what software to produce for the market.
I’d love for you to elaborate :)
But along that line of thought, I've always held that a human conversation is the best filter. I'll ask what you've worked on recently, what you learned, what you messed up, and what you hated about the tool / language / framework.
I strongly believe your ability to articulate your situation corresponds with your ability to do the job.
Take your current technical assessment and think about the types of responses or code submissions that really impressed you. What was special about them? What did you see in the response that drew a positive reaction from your team?
Can you re-frame your process or the prompt to only elicit those specific responses?
So instead of a whole exercise of building a React app or a whole backend API, for example: what would really "wow" you if you saw it in a candidate's submission? Could you re-frame your whole process so you only target those specific responses and elicit those specific outputs?
Now that you've taken what was previously a 2 hour coding exercise (for example) and distilled down to 3-4 key questions, you can seek the same outputs in 15-30 minutes instead.
There are several advantages to this:
1) Many times, candidates know the answer but they actually can't figure out what you're looking for when there's a lot of cruft around the question/problem being solved. You can end up losing candidates that know how to solve the problem the way you want, but because of the way the question was posed, the objective is opaque.
2) It saves a lot of time for both sides. Interviewer doesn't have to review a big submission, candidate doesn't have to waste time doing a long project.
3) By condensing the cycle, you can evaluate more candidates and you can hopefully select a top candidate before they get another opportunity. You shorten the cycle time because the candidate doesn't have to find free time to do a project or sit down for long interviews, you don't need to have people review the code submissions, etc.
There are quite a few people who can code but have 0 social skills and can’t talk well or at all. In that sense I have to disagree with you, but I still wouldn’t hire them because they drag teams down.
These days there are more people in the industry that have great social skills, memorize leetcode but have 0 ability to patiently sit down and do meaningful work.
They don't just drag teams down, they destroy once great companies.
But how do you know if the remote candidate used AI or his brain?
People are focusing on the > vs the >=, but for me the key point is being able to hold logic and variables in your mind over several iterations of a loop.
I’ve used similar tests in interviews before (a function that behaves like atoi and candidates have to figure out what it’s doing) and the good candidates are able to go over the code and hold values in their head across multiple iterations of a loop.
There are many candidates who can’t do this.
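To make that concrete, here's a hypothetical example of the kind of function I mean (not the actual exercise, just something in the same spirit); the candidate reads it and says what it returns for a given input:

    def mystery(s):
        n = 0
        for ch in s:
            if not ch.isdigit():
                break
            n = n * 10 + (ord(ch) - ord("0"))
        return n

    # Tracing by hand: mystery("42abc") -> n becomes 4, then 42, then the loop
    # breaks on "a", so it returns 42.

The point isn't the trick itself; it's whether the candidate can keep n and ch straight in their head across a few iterations.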
For anyone who missed the (poorly-explained) trick, the website uses a CSS trick to insert an equals sign, thus showing different code if read or if copied/pasted. That's how the author knows whether you solved it in your head or pasted it somewhere.
The thing I found particularly fascinating, given that the article was about weeding out applicants who use AI, is that if you take a screenshot and ask ChatGPT, it works fine (of course, it cannot see the extra equals sign).
So this is not really foolproof, and it also makes me think that feeding screenshots to AI is probably better than copy-pasting.
Sure, it's not foolproof, but a large percentage of folks would just copy&paste rather than taking the screenshot. Now that may start to change.
There was a story like this on NPR recently where a professor used this method to weed out students who were using AI to answer an essay question about a book the class had been assigned. The book mentioned nothing about Marxism, but the prof inserted invisible text into the question so that, when it was copy-pasted into an AI chat, it added an extra instruction to be sure to discuss Marxism in relation to the book (which wasn't at all related to Marxism). When he got answers that talked extensively about the book in Marxist terms, he knew they had used AI.
This didn’t work for me because Reader Mode popped up and showed the “hidden” equals sign.
Thanks, I was wondering how in the hell that many would get the answer wrong and what is this hidden equal sign he was talking about.
Maybe the question could be flipped on its head to filter further with "50% of applicants get this question wrong -- why?" to where someone more inquisitive like you might inspect it, but that's probably more of a frontend question.
The best test is to listen to a two-hour meeting without too many details and then have to figure out how to ship the feature.
Listen for 2 hours to how the customer thinks it should be done, then do it the way that actually gets the result they asked for.
4real
better yet, listen to a 1h meeting and compare notes/action points
That was REALLY weird to read. In reader mode the comparator in the first conditional was >=. But without reader mode it was just >.
Apparently they don't want to hire people who don't use reader mode.
But then the question is, how do you reach people who filter out the job ads?
Other way around.. if you use reader mode you generate the wrong answer ("tricked" like an LLM). At least, it's wrong according to the author/a former CTO.
It also excludes users of Lynx, cURL, possibly people using accessibility tools, those with custom/overriding style sheets.
Exactly the reason why you should NEVER copy-paste code from a website into your terminal, even if it has paste protection (https://lwn.net/Articles/749992/)
Was that the trick? When copying the text, it is also >=, which is why an online search or AI tools probably give the wrong answer as the article asserts. If you correct the code then at least Claude gives the right answer.
The trick is that the = has CSS styling with "opacity: 0; font-size: 1px;".
In normal mode the question is different than in reader mode, or when copied.
Thus, if you get the wrong answer, you "cheated" (or used reader mode)
Which is literally the point of the post. They have the = in >= at an opacity of 0 and a font-size of 1px, which means it doesn't appear if styles are applied properly. And their point is that candidates who copy/paste such trivial code into an AI/interpreter will get -11 because it just sees the >=.
Though a gap in their process is that, as you mentioned, various reading modes also remove styles and likewise might see it. Though a reader should filter out opacity:0 content.
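For anyone reading the comments without the article, here's a rough reconstruction of the kind of snippet involved (not the author's exact code; the values just match what's been quoted elsewhere in this thread):

    result = 0
    for x in [3, 3, 5]:
        if x > 3:          # renders as ">"; the hidden "=" turns it into ">=" when copied
            result -= x
        else:
            result += x
    print(result)          # read by eye: 3 + 3 - 5 = 1

    # Paste it somewhere that picks up the hidden "=", and every x takes the
    # first branch instead: -3 - 3 - 5 = -11

Do it in your head and you answer 1; paste it into a REPL or an LLM and you get -11.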
> into an AI solver
Not just an "AI solver", a Python interpreter will also do it "wrong". The idea is that it's so simple that anyone qualified should be able to do it in their head, so they get the answer without the equals sign (but IMO a qualified applicant might also know it takes 5 seconds to run it in the repl and it'd be better to be correct than to use the fewest tools, or might be using a screen reader).
I'm one of those hit by the reader mode "bug". I really wouldn't want the mode to give me anything beyond raw text+images, as that can be abused.
What did it say? Their website doesn't load.
Oh now it works, ok I got it right, I am hired
I think it’s important to test these systems. Let some % of candidates who get this wrong through to the next stage and see what happens. Does failing this test actually correlate with being a bad fit later?
If you want to ineffectively filter out most candidates, just auto-reject everything that doesn't arrive with a timestamp ending in 1.
> Let some % of candidates who get this wrong through
Really, the better test would be to not discriminate on it before you know it's useful, but store their answer to compare later.
You're right. I agree.
_How_ can you be a good hire for a _software engineering_ position, if you can’t get that one correct though?
It depends why they didn't get it "correct" (asked ChatGPT bad, used Python REPL not so bad, used screen reader very not bad) and what "correct" even means for this problem.
There's a bizarro version of this guy who rejects people who do it in their head because they weren't told to not use an interpreter and he values them using the tools available to solve a problem. In his mind, the = is definitely part of the code, you should have double checked.
Oh. I was reading this on a phone, and didn’t realise there’s hidden equal sign (though it’s mentioned).
That does change it. In that I can see how false negatives may arise. Though, when hiring you generally care a lot more about false positives than negatives.
> Let some % of candidates who get this wrong through to the next stage and see what happens.
This isn't a good methodology. To do your validation correctly, you'd want to hire some percentage of candidates who get it wrong and see what happens.
Your way, you're validating whether the test is informative as to passing rate in the next stage of your hiring process, not whether it's informative as to performance on the job.
(Related: the 'stage' model of hiring is a bad idea.)
My take-away: if you're doing simple coding problems like this for an interview with the "CTO", that's a very bad smell.
It's not for getting an interview with the CTO, but a very early filter to weed out poor candidates. But yes, if that's the only question then it's not going to discover talent.
I like it, a test so bad it just might work! I think the trick is not the equals sign; the trick is to keep it so simple and small that most qualified people will not try to short-circuit it.
I got the right answer, because I'm sitting on my toilet. At my desk I would simply run it and fail.
I got the right answer but it was so easy I went in with doubt I had done it right.
Which I understand is my issue to work on, but if I were interviewing, I'd ask candidates to verbalize or write out their thought process to get a sense of who is overthinking or doubting themselves.
> I went in with doubt I had done it right.
And if in your doubt you decided to run it through the interpreter to get the "real" answer, whoops, you're rejected.
That's cheating (even if it just assures you that your answer is correct)
Is it? The page implies it's allowed, but they want people who think running it is "more of a hassle".
Oh right, it seems to be allowed.
I don't know then. I can open up a terminal with a Python REPL and paste it really fast, faster than running it in my head.
That doubt is valid. Anyone reading this blog post (or in an interview, given the prevalence of trick interview questions) would know there must be some kind of trick. So, after getting the answer without finding a trick, it would be totally reasonable to conclude you must have missed something. In this case, it turns out the trick was something that was INTENDED for you to miss if you solved the problem in your head. At the end of the day, the knowledge that "I may have missed something" is just part of day to day life as an engineer. You have to make your best effort and not get paralyzed by the possibility of failure. If you did it right, had a gut feeling that something was amiss, but submitted the right answer without too much hemming and hawing, I expect that's typical for a qualified engineer.
You've built a filter that punishes verification at the hiring stage, then you're surprised when your team ships unverified code. You get what you select for. He selected for "doesn't double-check." Congratulations, you've got a team of developers who don't double-check.
The only correct answer is... both answers (1 and -11).
That is, if you're really interested in pursuing the position.
Not only are you willing to take their tests, but you go beyond what is required, for your own benefit and edification.
That's why, when presented with the URL during the interview, you immediately load it, and right-click View Source into another tab, while simultaneously making small talk with the former CTO interviewer.
Even though you're a backender, you know enough frontend to navigate the interspersed style and html and javascript and so, you solve both puzzles, and weave into the conversation the two answers, while also deciding that this is probably not the gig for you, but who knows, let's see how they answer your questions now...
is this a joke
Wouldn't this eventually get you sued for failure of ADA compliance?
Safari's reader sees the =. Edge does not.
somewhat off-topic: I had an interview for an Engineering Manager position with the Head of Engineering.
They had some leet code problem prepared and I tried solving it and failed.
During the challenge, I used some Python slice syntax (`:-1`) (and maybe some other stuff) that they didn't know.
In the end, I failed the challenge as I didn't do it in the O(n) way...
These kinds of stupid challenges exemplify what's wrong with hiring these days: one interviewer, usually some "VP"/"Head of", decides what the "correct" way to write some code is, when they (sometimes) couldn't write a line of code themselves (since they've been "managers" for millennia)
ps. they actually did not know what `:-1` means ...I rest my case
Were they a python engineer? I interview folks all the time in languages I don’t understand, and I ask dumb questions throughout the interview. I’ve been a professional (non-python) programmer for over a decade now and I don’t really know what :-1 means, I can guess it’s something like slicing until the last character but idk for sure.
Yes, they were (theoretically) a Python developer; I should have mentioned this was an ML role (your guess is right, it slices to just before the last char).
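For anyone else wondering, a quick illustration:

    >>> s = "hello"
    >>> s[:-1]          # everything except the last character
    'hell'
    >>> [1, 2, 3][:-1]  # works on any sequence, not just strings
    [1, 2]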
Just to be clear: the main problem is not that they did not know what `:-1` was - there are many weird syntax additions with every version - understandable.
IMHO the problem is that there's usually a single interviewer that decides go/no go.
We all have biases, so leaving such an important decision (like hiring an EM) to one person is (again, IMHO) ...stupid.
I read the problem without reader mode by accident and got it correct, then was mildly confused when I switched to reader mode (which I always use when a site is in light mode, as I prefer dark mode on everything) and saw the ">=". In normal circumstances I would've failed this.
Why wouldn't a qualified developer just run the code? Takes two seconds instead of ... whatever you're going for here.
Is it really faster to run the code than to just answer it? Like are people struggling to answer this question?
If you're looking for junior-ish python devs, I'd expect a good chunk of the better ones to have a python repl open and ready just as a matter of habit.
So for them, yes, it would clearly be faster to run the code than to work through it manually.
What you're doing here is selecting for candidates who are less comfortable with using the tools that they'd be expected to use every day in the role you're hiring for. It's likely to provide a negative signal.
I do think running the code would be a tiny bit faster, even if it's merely seconds either way. Opening a python REPL and pasting that would take around 5 seconds in my case. Running the code in my head would take roughly the same at first, but then if it's in an interview I'd take the time to double check. And then check a few more times because I'd expect some kind of trick here.
Considering there's no (explicit) instruction forbidding or discouraging it, I'd consider the REPL solution to be perfectly valid. In fact some interview tests specifically look for this kind of problem solving.
I get it still, I'd expect some valuable signal from this test. Candidates who execute this code are likely to do so because they really want to avoid running the code in their head, not just because it's more straightforward, and that's probably a bad sign. And pasting that into an LLM instead of a REPL would be a massive red flag.
I just don't think answering "-11" here is a signal strong enough to disqualify candidates on its own.
I can run the code in my head. I can probably be right. I can be 99.9999% sure I am right.
OR:
I could run the code in the interpreter and be 100% certain.
I know what attitude I would prefer out of my developers.
The attitude of at least checking the code before running it, I suppose? Or is the curl | sudo bash approach preferred nowadays?
Isn't the point of the article that blindly copying and pasting this code leads to the wrong answer?
I agree many developers do blindly copy and paste things off the Internet, but I don't think that's something to desire or celebrate.
The point of this article is this person punishes people who copy and paste the code ... into their python interpreter to check.
How many cases have you faced in your real job where this has happened?
So I wouldn't go so far as to say that I'd fire someone for copying and pasting code, but it's definitely part of my company's culture that copying and pasting code off of a website, and especially executing it, is something heavily discouraged to the point that it doesn't really happen at my job.
I'm perfectly happy to use Stack Overflow and other resources/tutorials, blog posts etc... to find solutions to problems, but just instinctively I would never think to copy and paste a solution from these sites and incorporate it into my codebase and I sure as heck wouldn't think to execute code from some untrusted site I happened to come across.
But this may also be a consequence of the domain I work in where we take security very seriously.
You can tell how safe a code snippet is from reading it.
Like, there's no way you're going to copy a 20 line algorithm from stack overflow on balancing a red-black tree and have it encrypt your harddrive.
Obviously you still need to test the code to make sure it works and understand what it's doing, but there is very little security risk here. Just look up the functions you're using and understand the code and you're fine.
There's a hidden equal sign, so if you copy-paste the code you get a different answer than if you just do it in your head.
If you copy it, you will get an extra equals sign due to CSS trickery.
Go on then, what answer do you get? Was it right?
Personally I filter out 100% of employers that think tests like these are viable filtering criteria.
I don't like to give non value add replies...but this is hilarious in its simplicity. I honestly thought I was losing my marbles - how could anyone NOT get it ri...ohhh! You sneaky you!
Funny, I got it wrong because the reader view showed the equals sign, but I had to go to the website to check my answer, which does hide it.
Using Safari's Reader mode (I do, since I have a hard time reading text on a white background), the `=` becomes visible
IMHO this is a dumb test
Taking something as simple as this as upfront, genuine experience sharing, and assuming their data is true:
If this test makes 50% of people fail, it's an amazing test! A nearly free way to cull half the applicants seems great. Honestly not useful for any big company, but feels great for SMEs.
Wouldn't the correct answer be 5?
Iteration starts from the first element. X is 3, 3, then 5.
Thanks to everyone who replied. I failed to account for the else condition when x is not > 3.
The logic results in 3 + 3 - 5 = 1, so it's not 5, sorry.
I'd love to know what logic path you followed to get 5 though!
I did it in my head but got 4 (3 * 3 - 5), so I fail, too. Hopefully I'd be paying closer attention if I was actually applying for a job.
where did your multiplication come from?
From not paying close attention :)
Presumably you'd get it if you short-circuited and didn't evaluate the else part.
3 comes before. Read it again.
It’s in a loop
If such a trivial test really did[1] filter out many candidates (beyond the technical limitation of the test that some client devices will render the =, as mentioned by users leveraging reader mode), I wonder if there is a greater observation we could draw from it. Personally I would assume 100% of programmers could very easily answer the question in seconds, and if people really were copy/pasting to an AI tool[2], I would assume they are so jaded and exhausted of clever reviews, nine round hiring funnels, leetcode, blackboard code, and so on, that it's just another of countless applications they've made and they just don't care anymore.
[1] Yeah, I'm super cynical about stories like this, and know that many if not most are just invented shower thoughts manifested into a fictional reality.
[2] Alternately they're pasting to a normal editor -- even notepad -- for a more coherent programming font, where again the = appears.
How am I supposed to see this equal sign? My browser just doesn't render it.
This is nonsense.
Did you read the article? That's kind of the point, that you don't see it visually, but when you copy/paste it then it's there. If you evaluate the code with your brain you'll get the correct answer, but copy and paste it and the answer is different.