You might just as well say “The bifurcation of neural spines in sauropods can be likened to Marcel Proust’s seven-volume masterwork À la Recherche du Temps Perdu.”
It would be exactly as meaningful.
>It was academia.edu, and who can possibly explain how they were able to get a domain name in the .edu TLD?
Relevant section from Wikipedia:
>Academia.edu is not a university or institution for higher learning and so under current standards it would not qualify for the ".edu" top-level domain. However, since the domain name "Academia.edu" was registered in 1999, before the regulations required .edu domain names to be held solely by accredited post-secondary institutions in the US, it is allowed to remain active and operational. All .edu domain names registered before 2001 were grandfathered in, even if not an accredited USA post-secondary institution.
There are some 10minutemail / trashmail providers out there who offer .edu emails - great for getting student-only benefits for free, but it sucks for everybody implementing those benefits, because they can't just check whether the domain ends in .edu but rather need to validate against a common list of valid universities...
> rather need to validate against a common list of valid universities
Don't you need that already anyway? There's no standard for how universities format their academic email addresses.
Plus, .edu only applies to American universities. Services that validate whether you're a "real" student by checking for .edu emails were quite annoying during my time as a student. A lot of these platforms don't even seem to know .edu is an American thing.
> validate against a common list of valid universities
Considering that many universities provide email addresses to alumni, I don't think that heuristic would work either.
Despite originally saying it was a perk of graduation, mine ended up cutting access after 10 years, citing cost savings (I imagine Google Workspace bills add up quickly compared to self-hosting email). I wouldn't be surprised if this is the trend now.
Do they? The institutions I've worked with shut off e-mail access between 6 months and 1 year after a student is no longer active.
I wonder what the benefit is.
I distinctly remember that freshman year, we were told in some big auditorium, during onboarding, that we should start using our university-provided emails as our primary "professional" emails.
Someone then asked what happens when we graduate and lose access to those emails, and they didn't have a particularly good answer.
I think that was also the same onboarding where they passed around a piece of paper for us to sign in with our name and social security number.
I still get the occasional email to mine inviting me to student events. Not sure they realize I graduated, but it was long enough ago that some of the buildings the events are held in didn't exist when I attended.
The benefit is that it makes it easier to hit them up for donations.
I'm not sure how common it is, but my wife has an edu email address despite being well over twenty years from graduation.
Given that the list of non-university .edu domains is static (or even shrinking, assuming some expire), couldn't you keep a list of those instead?
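A minimal sketch of that check in Python; the denylist contents are illustrative (academia.edu is the only entry we know from this thread), and a real one would have to be researched and maintained:

```python
# Grandfathered .edu domains that aren't accredited US universities.
# Illustrative denylist; a real list would have to be researched.
GRANDFATHERED_EDU = {"academia.edu"}

def looks_like_university_email(address: str) -> bool:
    """Rough check: domain ends in .edu and isn't a known grandfathered one."""
    domain = address.rsplit("@", 1)[-1].lower()
    if not domain.endswith(".edu"):
        return False
    # Also reject subdomains of grandfathered domains.
    return not any(domain == d or domain.endswith("." + d)
                   for d in GRANDFATHERED_EDU)
```

It still has the problems raised above (non-US universities, alumni addresses, disposable-mail providers), so it's a filter, not proof of enrollment.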
A spammer's dream.
I believe there's a vicious circle: a few companies start using AI with an actual idea, then shareholders of other companies say "we need to use AI as well, it works for them!", and then more companies start using AI to "not fall behind", etc. All with very few actual use cases. 99% are doing it just because others are.
I’m old enough to remember this phenomenon playing out multiple times:
1. SOA and later microservices
2. Big data & MongoDB
3. Kubernetes
4. Blockchain
That's what's happening at our company.
The owner lives in London and rarely visits, but he has arranged for AI consultants to come in and workshop with us to see how "AI can help the business". Our operations mainly consist of data entry.
Isn't data entry a really good use case for LLM technology? It depends on the exact use case, of course. But most "data entry" jobs are data transformation jobs, and those get automated using ML techniques all the time. Current LLMs are really good at data transformation too.
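A minimal sketch of the kind of transformation I mean, assuming the OpenAI Python client; the model name, prompt, and fields are all illustrative:

```python
# Sketch: turn a free-text invoice line into structured fields.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; model and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice_fields(line: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract vendor, date (ISO 8601), and amount from "
                        "the text. Reply with JSON only, with exactly the "
                        "keys: vendor, date, amount."},
            {"role": "user", "content": line},
        ],
    )
    # A sketch only: real code would validate the reply before storing it.
    return json.loads(resp.choices[0].message.content)

# e.g. extract_invoice_fields("Paid ACME Corp $1,204.50 on 3 March 2024")
```

In practice you'd check every field it returns before storing anything, for exactly the reliability reasons raised below.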
No, because they aren't reliable. You don't want to be storing hallucinated data. They can help write the scripts that do the actual work, though.
We can't even use AI language translation because of compliance / liability - we translate food ingredients.
"It says 'no shellfish', go ahead - eat it"
Even with lots of context, the various services we tried would get something wrong.
E.g. "huile" is oil in French, and sometimes it would get translated as "motor oil".
No, data replication or transformation is not a good use case for text generators.
If your core feature is data entry, you probably want to get as close to 100% accuracy as possible.
"AI" (LLM-based automation) is only useful if you don't really care about the accuracy of the output. It usually gets most of the data transformations mostly right, enough for people to blindly copy/paste its output, but sometimes it goes off the rails. But hey, when it does, at least it'll apologise for its own failings.
Ah yes, because hallucinations will definitely improve our data entry!
> All with very few actual use cases. 99% are doing it just because others are.
Same here, but I started a few months earlier than most (I work in a marketing department as the only one with SWE skills). There's a lot you can do with AI.
For one, you can finally introduce some more automation; people are more open to it now. And whenever you need more "human-like intelligence" in your automation, you basically make an LLM call.
It also helps in terms of creating small microsites, etc.
It helps me personally because whenever I want to make a small command-line tool to make my life easier, I can now just as well build a whole website, since it's about as quick nowadays with tools such as Codex and Claude Code (roughly 30 minutes).
I'm old enough to remember when transputers were the thing that was going to absolutely revolutionise everything to do with computers and computing.
Transmeta paid Linus to work on the Linux kernel for 6 years:
https://www.theregister.com/2003/06/17/linus_torvalds_leaves...
Transputers were a 1980s CPU innovation that didn't live up to their original hype, and they have little to no connection with Transmeta.
Aha, no, Transmeta was a totally different thing, from the early 2000s. The idea there was that they would have a special "Very Long Instruction Word" processor, kind of the opposite of RISC, where a lot of operations would be embedded into a single 128-bit opcode. Think of it as a hell of a wide horizontal-microcode architecture, if RISC is kind of a vertical-microcode architecture.
It was pretty clever. You loaded x86 code (or Java bytecode, or Python bytecode, or whatever you felt like) and it would build up a table of translated instructions on the fly, so the Crusoe's ludicrous SUV of an instruction set could run x86 as if natively. They were physically smaller and far less power-hungry than an equivalent x86 chip, even though they were clocked roughly 30% faster.
25 years ago they were going to be the future of computing, and people stayed away in droves. Bummer.
No no no, though, the transputer was a totally different thing. That was from 40-odd years ago, and - like the ARM chips we now use in everything - was developed in the UK by a company that did pretty okay for a while and then succumbed to poor management.
They were kind of like RISC processors. Much has been made of "you programmed them directly in microcode!" but you could say the same of any wholly combinatorial CPU, like the Good Ol' 6502, where the byte that's read on an instruction fetch directly gates things off and on.
The key was that they had very, very fast (like 10 Mbps) serial links that would connect them in a grid to other transputers on a board. Want to run more simultaneous tasks? Fire in more chips!
You could get whole machines based on transputers, or you could get an ISA card that plugged into a 16-bit slot in your PC and carried maybe eight modules about the size of a Raspberry Pi Zero (and nowhere near as powerful). I remember in the late 80s being blown away by one of these in some fairly chunky 386SX-16 doing 640x480x256 colour Mandelbrot sets in like a *second*.
Again, they were going to revolutionise computing, this is the way the world was going, and by the time of the release of unrelated Belgian techno anthem Pump Up The Jam, transputers were yet another footnote in computing history.
Wow, the Mandelbrot set example really put things into perspective.
Unoptimized code would easily take tens of minutes to render the Mandelbrot set in 640x480x256 on a 486. FractInt (the Stone Soup Group's collaborative effort, with contributions from Ken Shirriff) was fast, but would still take tens of seconds, if not longer -- my memory is a little hazy on this count.
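For anyone who never wrote one: the escape-time loop at the heart of it is tiny, and the cost is roughly iterations x pixels, which is why unoptimized renders took so long. A minimal Python sketch (resolution and iteration cap chosen to match the 640x480x256 figure above):

```python
# Escape-time Mandelbrot: one iteration loop per pixel is why it was slow.
WIDTH, HEIGHT, MAX_ITER = 640, 480, 256

def mandelbrot_iterations(cx: float, cy: float) -> int:
    """Iterate z = z^2 + c; return steps taken before |z| exceeds 2."""
    zx = zy = 0.0
    for i in range(MAX_ITER):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return MAX_ITER

# Worst case 640 * 480 * 256 ~ 78 million inner-loop steps for one frame.
image = [[mandelbrot_iterations(-2.5 + 3.5 * x / WIDTH,
                                -1.25 + 2.5 * y / HEIGHT)
          for x in range(WIDTH)] for y in range(HEIGHT)]
```

Even in pure Python on a modern machine this takes a noticeable while, which makes the one-second transputer figure above all the more impressive.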
Around that time I worked in a shop that had an Amstrad 2386 as one of our demo machines - the flagship of what was really quite a budget computer range, with a 386DX20 and a whopping 8MB of RAM (ordered with an upgrade from the base spec 4MB, but we didn't spring for the full 16MB because that would just be ridiculous).
Fractint ran blindingly fast on that compared to pretty much everything else we had at the time, and again it could show it on a 640x480x256 colour screen. We kept it round the back and only showed it to our most serious customers, and our Fractint-loving mates who came round after hours to play with it.
It still took all night to render a Lyapunov set.
This is how buzzword bingo has always worked. The eternal curse of the computer industry (especially software).
We're all trapped in history's worst prisoner's dilemma.
It follows the spam economy. If you can use AI to generate thousands of "articles", some unlucky Google user is bound to click on your link. When the cost of an article is near zero, it's still profitable.
Academia.edu might be the most useless and spammiest service out there. They don't seem to offer anything of value, but you can't know that before you pay.
Are other people subscribed to Academia.edu for some unknown reason, and have you created an email rule to send their messages to the spam folder? I'm not even from the US :/
I checked out Academia.edu; it’s packed with papers, but I’m not really sure about the quality, though.
They scrape, wheedle, pilfer, and mooch off other people's good work.
I was skeptical for the first little bit, but that's both relevant and a fuckin' bop
Damian Cowell can be an acquired taste for some; glad you enjoyed it. Here are some other random DC moments that might tickle your fancy:
* https://www.youtube.com/watch?v=wLAFy7o7Zvo
* https://www.youtube.com/watch?v=ZxoODPQ4CTM
* https://www.youtube.com/watch?v=ENnAa7rqtBM
* https://www.youtube.com/watch?v=0heT2_OX8bY
* https://www.youtube.com/watch?v=oGxDVXGRQpY