> The regex methods are unbelievably fast regardless of string length.
I immediately pinned regex as the winner, and here is why:
In Python, you can almost always count on specialized functionality in the stdlib to be faster than any Python code you write, because most of it has probably been optimized in CPython by now.
Second, I have to ask, why would someone think that regular expressions, a tool specifically designed to search strings, would be slower than any other tool? Of course it's going to be the fastest! (At least in a language like Python.)
But it's not the fastest; it's actually incredibly slow, and in general regexes are very slow. The key insight is that string processing in Python is very slow, and that regex here outperforms everything else because it's the only approach that is implemented in C.
With that insight, it should follow that using another implementation in C should outperform even the regex, and indeed the following simple Python method, which the article for whatever reason ignored, vastly outperforms everything:

    def contains_vowel_find(s):
        for c in "aeiouAEIOU":
            if s.find(c) != -1:
                return True
        return False

That's because s.find(c) is implemented in C. In my benchmark this approach is 10 times faster than using a regex:
https://gist.github.com/kranar/24323e81ea1c34fb56aff621f6c09...
Playing around, I found that the generator approach was competitive with this if you permuted the loop nest, and often even slightly faster (at the 1000-character length):

    def any_gen_perm(s):
        return any(c in s for c in "aeiouAEIOU")

I think the crux is that you want the inner loop, inside the fast C-implemented primitive, to be the one iterating over the longer string, and to leave the outer loop in Python to iterate over the shorter string. With both my version and yours, the Python loop only iterates, and calls into the C search loop, ten times, so there's less interpreter overhead.

I suspect that permuting the loop nest in similar variations will also see a good speedup, and indeed, trying it just now,

    def loop_in_perm(s):
        for c in "aeiouAEIOU":
            if c in s:
                return True
        return False

seems to give the fastest result yet. (Around twice as fast as the permuted generator expression and your find implementation on my machine, with 100- and 1000-character strings.)

> in general regexes are very slow
I really don't think this is true. If you assume that the string is ASCII, a well-optimized regex for a pattern of this type should be a tight loop over a few instructions that loads the next state from a small array. The small array should fit completely in cache as well. This is basically branchless. I expect that you could process 1 character per cycle on modern CPUs.
If the string is short (~100 characters or less, guessing), I expect this implementation to outperform the find() implementation by far, as find() almost certainly will incur at least one more branch mispredict than the regex. For longer strings, it depends on the data: the branchless regex implementation will scan through the whole string, so find() will be faster if there is a vowel early on in the string. find() still might be faster even if there are no vowels; the exact time in that case depends on the microarchitecture.
For non-ASCII, things are a bit trickier, but one can construct a state machine that is not too much larger.
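To make that concrete, here is a minimal C sketch of the kind of table-driven, branchless scan described above, assuming ASCII bytes; the names and structure are my own illustration, not code from any benchmark in this thread:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Two-state DFA for /[aeiouAEIOU]/: state 0 = "no vowel yet",
       state 1 = "vowel seen" (absorbing). */
    static unsigned char trans[2][256];

    static void build_dfa(void) {
        const char *vowels = "aeiouAEIOU";
        for (size_t i = 0; vowels[i]; i++)
            trans[0][(unsigned char)vowels[i]] = 1;  /* 0 -> 1 on a vowel */
        memset(trans[1], 1, sizeof trans[1]);        /* 1 -> 1 on anything */
    }

    static bool dfa_has_vowel(const unsigned char *s, size_t n) {
        unsigned state = 0;
        for (size_t i = 0; i < n; i++)
            state = trans[state][s[i]];  /* one table load per byte, no branches */
        return state != 0;
    }

    int main(void) {
        build_dfa();
        printf("%d\n", dfa_has_vowel((const unsigned char *)"rhythm", 6));  /* 0 */
        printf("%d\n", dfa_has_vowel((const unsigned char *)"hello", 5));   /* 1 */
    }

The loop never branches on the data, which is exactly why, as noted above, it always walks the whole string even after a vowel has been found.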
Sure, if you make a bunch of assumptions and manually implement how you think a regex will compile your code, allowing the optimizer to take your compile-time implementation and make a finely tuned algorithm specifically for one use case, you can make something that outperforms what is kind of a dumb algorithm.

But if you use an actual regex the way people actually use them, either the one provided by their standard library or one that is readily available, then regex is pretty slow. Certainly, in principle a regex is neither fast nor slow; it's a declarative description of a set of strings. Any claim about its performance rests on a particular implementation.

For example, you wrote out a benchmark that presumably is a lot faster than a naive search... but notice you didn't use the standard library <regex>; instead you manually implemented an algorithm to perform a search and then just decided that this is what a regex implementation would have done anyway (ignoring that by implementing it directly in source code, the optimizer can then fine-tune it).

So now... there is a 10% chance that you genuinely didn't know that C++ has a <regex> library that you could have used, or you did in fact know that C++ has such a library and you chose not to use it because... you also know it's very slow.

I did the benchmark myself, and the actual <regex> library is about 15x slower; I also compared it to Boost's regex library, which is about 10x slower:
https://gist.github.com/kranar/9a6e738b34c17fe2282c5cc8d74af...
Putting benchmarks where my mouth is: https://github.com/tylerhou/benchmarks/blob/main/vowels-benc...
I wrote implementations in C++ using a reasonable encoding of how an efficient regex compiler might compile the vowel regex. On an M1 Max MBP, the regex implementation outperforms the loop-based implementations significantly in all cases except long strings with vowels.

Strings were generated randomly following the regex [0-9A-Za-z]. Short strings were between 5 and 20 chars; long strings were 5000 chars.
The speedups are:
- Short strings & with vowels: regex is 2.6x faster
- Short strings & no vowels: regex is 5.9x faster
- Long strings & with vowels: loop is 329x faster. This is expected: because the regex implementation is branchless, it must always search through the whole string.
- Long strings & no vowels: regex is 1.5x faster.
If you add an early return to the regex implementation, the regex with early return becomes strictly faster than the loop versions. The early-return version is slower than the one without it in the short strings & no vowels case, because the extra branch has a cost. Full outputs are in a comment at the bottom of the linked file.
Why do you loop over the haystack multiple times, though? If you iterate over the long string once and write a fixed loop with vowel checks in a way that is friendly to autovectorization, it might be faster and more idiomatic C or C++.
Thanks for reminding me, I meant to also benchmark the interchanged version. I updated the file above with the new benchmark results.
The original commenter called `find()` once per vowel, so that's why I benchmarked the regex against the less-idiomatic code.
The interchanged version (loop over haystack outside, over vowels inside) is (mildly) faster than all the regex versions except for short strings & no vowels.
One thing to note is that none of the loop versions generalize easily to non-ASCII strings, while the regex version does.
I poked at that loop in Compiler Explorer for a few minutes, and I think it's allergic to autovectorization. Let's assume that a properly vectorized regexp from a proper language wins in the future :)
Very nice find (pun intended). I added an update at the end with your solution and a link to this comment.
The more you learn about regexes, the more you learn that you can’t make general statements about their performance. Performance is contingent upon the engine, its algorithms, its implementation, the patterns, and the input.
I've always found regexes to do a string search faster than other methods, but there is always more to learn! Thanks for the lesson today.
> Second, I have to ask, why would someone think that regular expressions, a tool specifically designed to search strings, would be slower than any other tool? Of course it's going to be the fastest! (At least in a language like Python.)
I know that in Go doing string operations "manually" is almost always faster than regexps. In a quick check, about 10 times faster for this case (~120ns vs. ~1150ns for 100 chars where the last is a vowel).
Of course Python is not Go, but I wouldn't actually expect simple loops in Python to be that much slower – going from "10x faster" to "2x slower" is quite a jump.
Perhaps "yeah duh obvious" if you're familiar with Python and its performance characteristics, but many people aren't. Or at least I'm not. Based on my background, I wouldn't automatically expect it.
It's worth noting here that Go has a notoriously slow regexp implementation.
>Second, I have to ask, why would someone think that regular expressions, a tool specifically designed to search strings, would be slower than any other tool? Of course it's going to be the fastest! (At least in a language like Python.)
Flexibly. They search strings flexibly.
Why do you think we cannot achieve a faster search in a less flexible fashion?
A few years ago I came across the article "SIMD-friendly algorithms for substring searching". The "Generic SIMD" section's intro and example are small and quite understandable. I modified it to implement an LZ77 sliding window in Zig.*
http://0x80.pl/notesen/2016-11-28-simd-strfind.html
* Or rather, I tried my best. I burnt out on that project because I kept jumping back and forth between making a proper DEFLATE implementation or something bespoke. The SIMD stuff was really tough and once I got it "working", I figured I got all I needed from the project and let it go.
https://github.com/rendello/compressor/blob/dev/src/str_matc...
Assume use of 8-bit characters. Declare a constant 256-entry array pre-filled with all False except for the five (or six) vowel characters. This is baked into the code and not initialized at runtime.
Now, for each character c in the input string, simply do an array index and see whether it is true (a vowel) or not. This avoids either five conditionals or a loop over the string 'aeiou'. The vowel test is constant-time regardless of the character value.
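A minimal C sketch of that recipe, as I read it (the comment gives no code, so the names are mine): designated initializers bake the table in at compile time, and the per-character test is a single array index.

    #include <stdbool.h>

    /* 256-entry table, all false except at the ten vowel code points. */
    static const bool IS_VOWEL[256] = {
        ['a'] = true, ['e'] = true, ['i'] = true, ['o'] = true, ['u'] = true,
        ['A'] = true, ['E'] = true, ['I'] = true, ['O'] = true, ['U'] = true,
    };

    bool contains_vowel(const unsigned char *s) {
        for (; *s; s++)
            if (IS_VOWEL[*s])   /* constant-time test, no loop over "aeiou" */
                return true;
        return false;
    }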
In Python this is going to be slow, as you’ll have a ton of Python code in your hot loop.
It’s also going to be even more broken than TFA if any non-ascii character is present in the string.
This doesn’t even cover strings like “fly” and “naïve”. ;)
Yeah, should be called "how to detect aeiou in a string".
well... it'll work for naïve. twice over, even
What am I missing about fly?
The article thinks aeiou are all the vowels, and forgets about y.
Is y considered a vowel in English? In my native language it is not.
Sometimes.
Which is actually part of why I clicked into the article: I expected it to get into the complexity of trying to detect whether 'y' was a vowel as part of the search, and instead got a mostly banal Python text-search article.
You can see the technical rules for when 'Y' is a vowel in english here:
https://www.merriam-webster.com/grammar/why-y-is-sometimes-a...
Y is considered to be a vowel if:
- The word has no other vowel: gym, my.
- The letter is at the end of a word or syllable: candy, deny, bicycle, acrylic.
- The letter is in the middle of a syllable: system, borborygmus.
I dunno about those rules; there are exceptions. Yttrium. Yggdrasil. Probably others, but those are the ones off the top of my head.
I think the best way to define when Y is a vowel is when it's not a consonant. Basically, if you make the sound that Y represents in the word "yes", it's a consonant. Otherwise, it's a vowel. (At least, no exceptions come to mind.)
Wow, how would you code that?!
Heh, good question. I’d probably use a heuristic like what’s discussed in this thread and hope that it works. :D
It depends on the word, and possibly even on dialect. There isn’t a 1:1 mapping between characters and vowels. Vowels are fundamentally a phonetic concept, not a graphemic one.
This is also why Unicode doesn’t have a “vowel” character property. Otherwise you could use a regex like `\p{Vowel}`.
Yes, this is true. Many people seem to think of words as written text and not as spoken words, so they focus on analyzing the characters in a word and not how it sounds.
I had a linguistics professor say something like “Writing is parasitic on speech”
The list of vowels in American English is "A E I O U and sometimes Y" as taught to school children. It would be a vowel in Ypsilanti for example.
same, i is a vowel, y is a consonant
TIL: When y forms a diphthong—two vowel sounds joined in one syllable to form one speech sound, such as the "oy" in toy, "ay" in day, and "ey" in monkey—it is also regarded as a vowel. Typically, y represents a consonant when it starts off a word or syllable, as in yard, lawyer, or beyond.
In English it is pretty much unpredictable whether Y is a vowel or a consonant.
While for older English words there is a complex set of rules mentioned by another poster for determining whether Y is a vowel, as mentioned by yet another poster, English also includes more recent borrowings from languages with other spelling rules for Y.
At its origin, Y was a vowel, not a consonant. It was added to the Latin alphabet for writing the front rounded vowel that is written "ü" in German, "u" in French or "y" in Scandinavian languages.
It is very unfortunate that in English, and in some other languages that have followed English, Y has been reassigned to write consonant "i". This has created a lot of problems due to the mismatches between the spelling rules of different languages. The rule most consistent with the older usage would have been to use J for consonant "i", as in German and other languages inspired by it. However, in many Romance languages the pronunciation of consonant "i" has changed over time, leading to three other phonetic values for the letter J: as in English (i.e. Old French), as in French/Portuguese, and as in Spanish.
So the result is that both for Y and for J there are great differences in pronunciation between the European languages, and the many words using such letters that have been borrowed between languages create a lot of complexity in spelling rules.
Thank you for the extended explanation, really interesting stuff!
Letters are not vowels; sounds are vowels. The letter Y sometimes represents a vowel sound and sometimes a consonant sound.
https://www.youtube.com/watch?v=1_it0G2KcrM
Y and W are sometimes used as vowels. They are technically consonants, but in some cases produce a vowel sound.
The rule in English is not "all true words contain a vowel" it's that all words contain a vowel sound.
Except the ones that don't, because English is a very messy language.
To paraphrase Arthur Dent, "this must be a strange new usage of the word 'fastest' with which I am not previously familiar".
The fastest way to detect a vowel in a string on any reasonable architecture (Intel or AMD equipped with SIMD of some kind) is to use 3-4 instructions that process 16/32/64 bytes at once (depending on SIMD width). Obviously, access to these will require using a Python library that exposes SIMD.
Leaving SIMD aside, a flat byte array of size 256 will outperform a bitmap since it's always faster to look up bytes in an array than bits in a bitmap, and the size is trivial.
Waiting for a few follow-ups:
“What every programmer should know about vowels”
“You and your vowels”
“What the vowels have wrought”
“We’re hiring! Vowels (YC 26)”
And last but not least:
“Things I Won’t Work With: Vowels”
Judging by this blog post, I should be a Distinguished Professor at CMU.
We are hiring.
This comment seems needlessly unkind
Idea: let's nerd-snipe one of those, and then you can have their office.
need to add a bloom filter
LOL. I find it so funny that this blog takes the time to include an Amazon affiliate link.
Things don't have to be complicated or pretty to be fast or good. For a UTF-8 or ASCII (English) string:

    #include <stdbool.h>

    bool has_vowel(const char *s) {
        for (; *s; s++) {
            unsigned char c = (unsigned char)*s;
            if ((c & 1) == 0)             /* every vowel letter is odd; skip evens */
                continue;
            unsigned char l = c | 0x20;   /* fold to lowercase */
            if (l < 'a' || l > 'z')       /* guard: without this, '!' or '5' would */
                continue;                 /* collide with vowel slots */
            switch (c & 0x1F) {           /* 'a' & 0x1F == 'A' & 0x1F, etc. */
            case 'a' & 0x1F: return true; /* a or A */
            case 'e' & 0x1F: return true; /* e or E */
            case 'i' & 0x1F: return true; /* i or I */
            case 'o' & 0x1F: return true; /* o or O */
            case 'u' & 0x1F: return true; /* u or U */
            }
        }
        return false;
    }

Checking for consonants is about as free as it gets. 50-70% of characters in English text are consonants; by checking one bit, you can eliminate that many checks across the whole string. This should also more or less apply to any text encoding, since the technique comes from an artifact of the alphabet itself. It just so happens that all English vowels (including Y and W) fall on odd indices within the alphabet.

Characters aren't magic! They're just numbers. You can do math and binary tricks on them just like you would with any other primitive type. Instead of thinking about finding letters within a string, sometimes you get better answers by asking "how do I find one of a set of numbers within this array of numbers?". It seems to me that a lot of programmers consider these to be entirely disjoint problems. But then again, I'm an embedded programmer, and as far as I'm concerned characters are only ever 8 bits wide. String problems are numeric problems for me.
While I don't want to discourage people from exploring problem spaces, do understand that the problem space of ASCII has been trodden to the bedrock. Many problems like "does this string contain a vowel" have been optimally solved for decades. Your explorations should include looking at how we solved these problems in the 20th century, because those solutions are likely still extremely relevant.
The problem with the 'characters are just numbers' approach is that they're not _just_ numbers. With the advent of Unicode, they're _sequences of numbers_, so bytes can mean different things when part of a sequence than when standalone.
That said, since they're numbers, we should use the most efficient checks for them... which are likely vectorized SIMD assembly instructions particular to your hardware. And which I've seen no one mention.
Yes, that's it. Vectorized SIMD annihilates this problem, a space I've been working in since 2006, and it wasn't all that new even then. A close second would be a heavily optimized (pipelined and less branchy) table or bitvector lookup. Doing anything that involves lots of control flow, like the grandparent post, will be slow as a wet week with or without bit-manipulation tricks, due to the inherently unpredictable nature of the branches (subject to our input).
If your approach to "the fastest way to do x" starts with writing some python, you're already not taking the assignment seriously.
I think it’s reasonable to use algorithmic complexity as a definition for “fast”.
Can you elaborate or show a better way?
The usual way to do this in SIMD is via vpermb(lut, str) == str, or the potentially slightly faster vpshufb(str >> 1, lut) == str.
str | 0x20 if you want it case-insensitive.
This works because the lowest five bits, more specifically bits two to five, of the vowels are distinct.
The lut is zeroed except at the indices corresponding to the vowels, which contain the vowel itself: lut[vowel & 31] = vowel, or lut[(vowel & 31) >> 1] = vowel for the vpshufb variant.
> The lut is zeroed except at the indices corresponding to the vowels, which contain the vowel itself
So in the comparison vs. the original str, vowels get a true and consonants get a false. Doesn't the "==" then return true if all the chars are vowels?
I used the C operators as shortcuts to denote SIMD operations. So the == would be element-wise and return a mask of the vowels in the input vector.
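To make that concrete, here is my own SSSE3 sketch of the vpshufb variant (vpermb needs AVX-512 VBMI, so the more widely available 128-bit flavor is shown; the names are made up, and this is an illustration rather than anyone's benchmarked code). Non-ASCII bytes can never compare equal, since the table holds only ASCII vowels.

    #include <stdbool.h>
    #include <stddef.h>
    #include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

    /* lut[(vowel & 31) >> 1] holds the lowercase vowel byte itself;
       every other entry is 0. */
    static bool has_vowel_sse(const unsigned char *s, size_t n) {
        const __m128i lut = _mm_setr_epi8(
            'a', 0, 'e', 0, 'i', 0, 0, 'o', 0, 0, 'u', 0, 0, 0, 0, 0);
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m128i str = _mm_loadu_si128((const __m128i *)(s + i));
            /* index = (c >> 1) & 15; the 16-bit shift leaks a bit across
               byte lanes, but the mask discards it */
            __m128i idx = _mm_and_si128(_mm_srli_epi16(str, 1),
                                        _mm_set1_epi8(0x0F));
            __m128i ref = _mm_shuffle_epi8(lut, idx);              /* LUT lookup */
            __m128i low = _mm_or_si128(str, _mm_set1_epi8(0x20));  /* fold case */
            if (_mm_movemask_epi8(_mm_cmpeq_epi8(ref, low)))
                return true;  /* some byte matched its own vowel slot */
        }
        for (; i < n; i++) {  /* scalar tail for the last < 16 bytes */
            unsigned char c = s[i] | 0x20;
            if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
                return true;
        }
        return false;
    }

The comparison really is element-wise, as described: cmpeq produces a per-byte mask, and movemask collapses it to an integer.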
I was curious if there were any interesting bit-masking patterns in ASCII for vowels that could be exploited, so I shamelessly asked ChatGPT and it gave a pretty nice insight:
- Setting bit 5 high forces lowercase for the input letter
- masking with `& 31` gives an index from 1-26 for the input letter
Then you can use the `bt` instruction (on x86-64) to bit-test against the set of bits for a, e, i, o, u (after lowercasing) and return whether it matches, in a single instruction.
It came up with this, which I thought was pretty nice: https://godbolt.org/z/KjMdz99be
I'm sure there's other cool ways to test multiple vowels at once using AVX2 or AVX-512, I didn't really get that far. I just thought the bit-test trick was pretty sweet.
Chat transcript is here (it failed pretty spectacularly the first couple times, tripping over AT&T syntax and getting an off-by-one error, but still pretty good) https://chatgpt.com/share/684c8b39-a9c4-8012-8bb6-74e1f8b6d0...
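For reference, a scalar C rendering of the described trick (my own sketch, not the code behind the godbolt link; I've added a letter-range guard, since without it punctuation such as '!' aliases into the 'a' bit):

    #include <stdbool.h>
    #include <stdint.h>

    static bool is_vowel_bt(unsigned char c) {
        unsigned char l = c | 0x20;               /* bit 5 high forces lowercase */
        if (l < 'a' || l > 'z')
            return false;                         /* only letters can be vowels */
        const uint32_t mask = (1u << ('a' & 31)) | (1u << ('e' & 31))
                            | (1u << ('i' & 31)) | (1u << ('o' & 31))
                            | (1u << ('u' & 31));
        return (mask >> (l & 31)) & 1;            /* compilers lower this to bt */
    }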
Rust, Go, C?
Or even Lua if you don't want to go to a statically compiled typed language?
TBH any mainstream language (and most non-mainstream ones) will be better than Python at this. Python can be pretty useful, but it's not known for having reasonable performance.
See my other comment.
You can do extremely trivial binary checks on ASCII, UTF-8 (and most? other) encoding schemes. All vowels including W and Y contain a 1 in the lowest bit. Then by comparing bits 1-4, you can do a trivial case-insensitive comparison. You detect case by checking bit 5. 0 is upper, 1 is lower.
How does that fly with foreign scripts? It certainly won't work with Greek, Arabic, Korean, Chinese, Japanese and the Indian languages.