Did worldcoin fail or shut down? Is there an article about that? It seems they stopped operating in a few countries in March 2022, anything beyond that?
The only way this kind of research could be done "in the open" is if it was funded by taxpayers' money.
Except we have global corporations with elite fiscal lawyer teams, but no global government (giant lizards are always disappointing) - so some global corporation was going to own it in the end; it was only a matter of "time" and "US or Chinese". The time is now, and the winner is the US.
Moving on.
The next move is for (some) governments to regulate, others to let it be a complete Wild West, and bad actors to become imaginative, and in the end... taxpayers' money will clean up the unforeseen consequences while investors' money is spent on booze and swimming pools, I suspect?
Still, nice to watch the horse race for being the "Great Filter" between 'AI', 'nukes' and 'climate change' (with 'social media' as the outsider).
This seems an important article, if for no other reason than that it brings the betrayal of OpenAI's foundational claim (still brazenly present in its name) out of the obscurity of years-old HN comments and into the public light and the mainstream.
They've achieved marvellous things at OpenAI, but the pivot, and the long-standing refusal to deal with it honestly, leaves an unpleasant taste and doesn't bode well for the future, especially considering the enormous ethical implications of an advantage in the field they are leading.
Stability straight up proved that OpenAI's ideas about the importance of locking the tech up and guardrailing it are a big waste of time.
The world didn’t end when anyone could run Dall-E 2 level image gen on gamer hardware and without guardrails. Instead we got to integrate that power into tools like Blender, Photoshop, Krita etc for free.
The first company to democratize ChatGPT tech in the same way will own this space, and OpenAI's offering will once again become irrelevant overnight.
I find it a little odd that Elon seems to take a swipe at OpenAI any opportunity he gets. If he cares so much about them not making money, maybe he should have put his twitter cash there instead? It's reassuring to me that the two people running policy work at the big AI "startups", Jack Clark (Anthropic) and Miles Brundage (OpenAI, who was hired by Jack iirc), are genuinely good humans. I've known Jack for 10 years and he's for sure a measured and reasonable person who cares about not doing harm. Although I don't know Miles, my understanding is he has similar qualities. If they're gonna be for profit, I feel this is really important.
Edit: Well, I guess these tweets explain the beef well -
Step by step, ever since he called the diver a "pedo", ever since "funding secured", I've begun to realise just how petty and pathetic Elon is. At every opportunity he seems to show just how vindictive he really is. A man-child who now has way too much money. Buying Twitter on a whim is the latest in a string of decisions which do not align with the "let's get to Mars and save Earth's environment" image he likes to project.
He's lost himself in the fake popularity of being a social media celebrity. He started believing that having 100 million followers on a web site really means that a continent's worth of people adore you. For all his complaints about bots after he got cold feet on the Twitter purchase, he seems strangely naïve about how social media really works and what's real there.
“This is ridiculous,” he said, according to multiple sources with direct knowledge of the meeting. “I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”
By Monday afternoon, “the problem” had been “fixed.” Twitter deployed code to automatically “greenlight” all of Musk’s tweets, meaning his posts will bypass Twitter’s filters designed to show people the best content possible. The algorithm now artificially boosted Musk’s tweets by a factor of 1,000 – a constant score that ensured his tweets rank higher than anyone else’s in the feed.
I almost feel like Elon's story is the ultimate one about someone getting addicted to popularity and social media, I've seen so many 'smart', respected people get onto platforms and then slowly but completely fall from grace.
The difference with Elon is he had real power, money and influence, so in the end he used that to buy Twitter.
Much like social media can be a distraction from our bigger desires and goals, I feel like Elon's buying of Twitter is the ultimate distraction from the more interesting work he was doing.
A similar story to Donald Trump...it's interesting how social media and the addiction to constant attention can rot people's brains, and it doesn't discriminate regardless of financial status.
This is pretty plainly obvious to anyone who has been a Twitter user throughout this whole thing.
I was (yes, was) a daily user for the last ~8 years. Then a few months ago, all of a sudden, like half my timeline was either Elon's tweets or tweets about Elon. I don't follow him and never have. But there he was, all over my timeline.
I have been using Twitter for 12 years and this has not been "obvious" to me at all. You shouldn't assume that your opinions are obvious or even shared by the majority of people you mention.
Another 12-year user here: I had to block Elon because muting him wasn't enough, and I was getting tired of the sudden massive influx of Elon tweets I was seeing.
This, in my view, is the problem with people who use social media to post everything, who overshare.
Elon could have been that guy who was doing cool stuff, super smart and doing some good for the world, but now most people think he is a jerk.
For people who get known for their work, it's not a good look to wade into politics or controversial topics they have no expertise in or standing to talk about.
I mean, if it's true that he has 100 million followers and only gets 10,000-ish impressions, then something is seriously wrong. That flat out makes no sense.
Even if you assume 60% bots, and 70% of users not reading their timeline.
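To make that concrete, here's a back-of-envelope sketch of the claim (the 60% bot and 70% inactive figures are just the guesses from the comment above, not real data):

```python
# Back-of-envelope check: are ~10k impressions plausible for 100M followers?
# The 60% bot and 70% inactive percentages are guesses, not measured values.
followers = 100_000_000
real_accounts = followers * (1 - 0.60)       # assume 60% of followers are bots
active_readers = real_accounts * (1 - 0.70)  # assume 70% never read their timeline

print(f"Potential real, active readers: {active_readers:,.0f}")   # 12,000,000
print(f"Implied reach at 10k impressions: {10_000 / active_readers:.2%}")  # 0.08%
```

Even under those pessimistic assumptions, 10,000 impressions would mean less than one in a thousand of the remaining real, active followers ever saw a tweet, which is why the number looks so strange.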
Agreed. I'm not taking him at his word that he is correct, and of course we would need more data. But it does sound strange.
My problem with all this Twitter reporting is that it reminds me of Tesla a few years ago: people wildly extrapolating and inferring from a tiny amount of information and then deriving proof that Musk is a piece of shit and the company is going down in flames.
The first can be argued, but the second doesn't seem to be happening nearly as much as people claim.
Is it though? Just because somebody follows Musk does not mean they engage with his content enough for it to be floated up past everything else they may follow. It's very likely that a great deal of those followers don't find his content interesting, but don't find it objectionable enough to unfollow him.
It has become evident that placing one's trust in mainstream media is an exercise in futility, as they are often found to be biased in their reporting against tech.
Elon Musk's claim that an algorithm flaw resulted in his de-ranking was indeed accurate.
The feature was intended to lower the ranking of accounts that are frequently blocked. However, the flaw was that it did not account for larger accounts, allowing a small group of individuals to effectively mount a DDoS-style attack against large accounts.
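If that description is accurate, the bug amounts to a missing normalization step. A hypothetical sketch of the difference (none of these function names or thresholds come from Twitter's actual code; this only illustrates absolute counts versus rates):

```python
# Hypothetical illustration of the described flaw: penalizing accounts by the
# raw number of blocks received, without normalizing for audience size.
# Names and thresholds are made up for illustration.

def penalized_naive(blocks_received: int, threshold: int = 1_000) -> bool:
    # Flawed version: a fixed absolute threshold. A coordinated group of a few
    # thousand users can push any large account over it, regardless of reach.
    return blocks_received > threshold

def penalized_normalized(blocks_received: int, impressions: int,
                         max_block_rate: float = 0.01) -> bool:
    # Normalized version: compare the *rate* of blocks to impressions, so big
    # accounts aren't punished simply for being widely seen.
    return blocks_received / max(impressions, 1) > max_block_rate

# A small account genuinely disliked by a third of its viewers:
print(penalized_naive(500), penalized_normalized(500, 1_500))           # False True
# A huge account mass-blocked by a tiny, coordinated fraction of viewers:
print(penalized_naive(5_000), penalized_normalized(5_000, 10_000_000))  # True False
```

The naive version lets a small organized group suppress any large account; the rate-based version catches the genuinely disliked small account instead.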
What’s mainstream media in this case? The Verge, a web-only publication that only writes about tech? Seems like stretching the definition beyond any usefulness…
Additionally, it is worth noting that if one were a professional Twitter user with a discerning eye towards mainstream media, this issue may have been apparent from the outset.
10,000 impressions per tweet for an account with over ten million followers is remarkably low, and hard to square with the law of large numbers.
"OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all." - Elon
You can think what you want of Elon, but he is in the right here.
I don't understand. He's listed as one of the founders, and I always assumed he had some say. If that's really how he thinks, why wouldn't he have done something?
Do we have any reason to believe this isn't just more empty grifting from him to optically distance himself from an unethical company he profits from?
>In 2018, Musk resigned his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars, but remained a donor
I don't think he counts as an investor, and I'd imagine he has stopped donating.
He is definitely right there. At this point, I consider OpenAI partially acquired by Microsoft, since it is almost majority controlled by them. It is essentially a Microsoft AI division.
It is similar to what Microsoft did with Facebook in the early days, slowly acquiring a stake in the company, but this is a more aggressive version of that with OpenAI. What you have now is the exact opposite of their original goals in [0]:
Before:
> Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. [0]
After:
> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. [1]
The real 'Open AI' is Stability AI, since they are willing to release their work and AI models the way OpenAI was supposed to.
This "not doing harm" narrative is very grating. It's just another transparent and self-serving attempt by a company to co-opt progressive vernacular to justify whatever questionable policy they have as a moral imperative.
This is the corporate equivalent of "think of the children". A justification that could have been used to gate-keep any and all aspects of computer science, and one that isn't even logically consistent since they only hide their code and weights while still publishing their research: making it ultimately reproducible by malicious actors, especially those well-funded, while slowing down researchers and competitors.
We are privileged to work in a field where we have open access journals, and where there is a large ongoing drive to improve the reproducibility of papers by releasing code and weights. Their behaviour is the antithesis of what the field is working towards, and having talked to many researchers, I don't know many that are fooled by it.
Could you sum up the beefs if it is relevant? I ignore twitter as much as I can.
Part of me feels that in the run to more privacy, we don’t really have a reputation system anymore. You mention that Jack and Miles are good people, but how can we know such things as a general public?
In the days of yore, when people were local, you kind of knew who was who. In the global space, this becomes hard. I feel this ties in with discussions on trust and leaning on people who are responsible and wise.
It's basically exactly as the article. He said he founded open AI (hmmm) with the idea that it's open... ai.. and now it's not, it's closed and for-profit. Re: Jack and Miles, not only is your point well taken, we'd also have to agree that I'm a good judge of character...! :D
OpenAI went downhill fast after Elon left the board of directors due to a "conflict of interest" with Tesla. I don't know if he would have allowed the for-profit restructuring after giving them so much money precisely so that AI research wouldn't need profits. It probably also didn't help that he poached Karpathy from them and put him in charge of Tesla's AI efforts. So it's no surprise that there is a lot of potential beef here.
So Elon wanted to build an open-source, non-profit AI, but had to resign from OpenAI, which he had cofounded with that intention, because he wanted to create a closed-source, for-profit AI for Tesla, and that created a conflict of interest. It sounds quite contradictory to me to present this as evidence of being an open-source, non-profit advocate. It reads as "I support open-source, non-profit AI as long as others do it; my own intention is to use AI for profit".
It is really hard to be "Open Source" and "Non-profit" at the same time without the company delving into political grandstanding and ending up on the leash of a for-profit (e.g. Mozilla).
I think it was kind of inevitable. Once OpenAI proved that AI could create genuinely meaningful outputs, other companies started doing it for profit, and taking into account the cost of running the thing itself, it was only a matter of time.
I don't follow, other companies can do whatever they want. OpenAI didn't present itself as for-profit and closed-source, so they shouldn't have cared what profit incentive others had.
In a way I wish for another AI winter. Then we wouldn't have to mourn the loss of aesthetics and morality.
Stable Diffusion is accessible at no charge, but is neither free (libre) software nor open source, as their “don’t use for bad things” clauses run afoul of “freedom 0” aka “no discrimination against fields of endeavour” fundamental to both notions.
You do realize that’s an actual word? :) In topology, a thing is called connected if it contains no proper subset that’s open and closed at the same time, aka clopen subset.
Given the amount of compute required to run their models, as well as Microsoft’s investment into the company through providing Azure services, it is quite likely they’ll be acquired by Microsoft.
> Microsoft’s investment into the company through providing Azure services
Microsoft doesn't just provide hardware; it invested a literal 10 billion dollars into OAI (https://www.bloomberg.com/news/articles/2023-01-23/microsoft...). It's fair to say OpenAI is Microsoft's extension now, and we should be proportionately wary of what they do, knowing what MS usually does.
When I realized that OpenAI was not open source, I knew it wasn't going to be open in any sense. How can you, in the 2010s, start a thing with the name 'Open' and then not have it be open source?
If someone can make money off something, after achieving global visibility, they will - it's almost an iron law. The only exceptions I can think of are Wikipedia, Archive.org, and Wikileaks.
Unfortunately all of those have their own ethical issues, commercial corruption is not the only challenge to well-intentioned initiatives when meeting scale.
Also almost every big open-source project? ffmpeg?
OAI is not bad for being for profit, it is bad for the bait and switch. They started off with "Open" and still have it in their name even as they turned into the next Microsoft.
The main reason to worry, though, is not the proprietary monetization of "AI" algorithms: Just like it was not an algorithm (pagerank) but the invention of adtech that spawned surveillance capitalism, here too the main question is what sort of "disruption" can this tech facilitate, as in which social contract will be violated in order to "create value".
"Success" in "tech" has for a long time been predicated on the absence of any regulation, pushback or controls when applying software technology in social / economic spheres previously operating under different moral conventions. In the name of "not stifling innovation".
Ironically, our main protection may be that we now live in a "scorched Earth" environment. The easy disruptions are done, and "tech innovation" is bumping against domains (finance, medical) that are "sensitive".
I'm curious as to what you mean by scorched Earth. Literally the fact that we are burning up our atmosphere, or something else? That said, I'll root for nature batting last before I root for the hellscape people unleash for economic incentives.
What needs to be understood is that this sort of technology is not an equalizer, regardless of the PR behind having your own personal Einstein/secretary at your beck and call. You can look at the state of modern computing sans AI to see this is true: many people with desktops are using Microsoft, Apple, or Google OSes, which become more and more restrictive as time goes on, despite the capabilities of such computers increasing regularly.
Do you know of any open source projects with the approximately $5 million it takes to train GPT3? Do you think the AI scientists who invented the underlying techniques would do so without being paid? No, Google and others paid for these people's work. Even if OpenAI was open source and still a non-profit, its funding would come from the profit generated by corporations and the wealth of individuals who became rich through capitalism. How do you think Sam Altman, Peter Thiel, and Elon Musk were in a position to found OpenAI in the first place? For good or for bad, capitalism is the reason this stuff exists.
> Even if OpenAI was open source and still a non-profit, its funding would come from the profit generated by corporations and the wealth of individuals who became rich through capitalism
Sounds to me like you're describing a system that fatally tethers individuals to profit motive, and technological advancement to the good will of a few, rather than something that magically allows for great projects to exist.
Everything open source = No copyright and ownership of produced models.
That's a sure-fire way of guaranteeing only Government funding, free labour, donations and oh so much politics of various forms. And I don't think the "speed of improvement" will increase, I'd say it'd slow to a crawl as there would be no money in it.
Plenty innovations have come from state funding, like transistors, computers, the internet, and state funding for research still is the biggest by far. (a lot of it through the military, but still)
Historically speaking, the pivot to capitalism occurs from financing arrangements. Financial obligations mandate attention, overriding charters or missions. Hospitals in America took on financing to replace decaying facilities and just like that turned towards for-profit healthcare. Universities as well. There is no escaping financial servitude because the organization never can earn enough to get out of its financial obligations and restructure itself to never take that path again. Once the change from non-profit to for-profit happens, it is never reversed.
OpenAI has one shot at fulfilling its social mission, and financing will ruin any hopes or dreams it has of taking the non-profit path. It needs to ignore pressure for exponential growth for the sake of competition or whatever strategists see as a threat or opportunity, because adopting their frame demands financing.
He probably would have done the same with his crypto-scam startup Worldcoin if it hadn't failed in every way before he could pull off the scam.