Welcome to my perma-web store, a dedicated space for preserving source material against the risk of online disappearance. Each page in the archive includes a link back to its original source. I encourage visitors to use those original links whenever possible, so that the original creators receive the recognition and traffic they deserve, as long as the material remains accessible online.
My motivation for capturing these snapshots is rooted in a simple yet powerful need: to keep ever-changing web content available for my notes. It's easy for online articles, essays, and stories to disappear without a trace, taking with them knowledge and perspectives that deserve to be preserved. This project is implemented with SingleFile, Jekyll, and Vercel, as I've shared in this piece.
The source for this website can be found on GitHub. Note that you shouldn't fork it; if you want to reuse it, create an unconnected repo instead.
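For the curious, here is a minimal sketch of what the capture step can look like, assuming the SingleFile CLI is installed. The `_archive` folder, the file-naming scheme, and the `archive_page` helper are illustrative assumptions, not the actual layout of this repository; see the linked piece and the GitHub source for the real setup.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of the capture step: save a page as one self-contained
HTML file with the SingleFile CLI and drop it into a folder Jekyll can publish.
Folder name, naming scheme, and this helper are illustrative assumptions."""
import subprocess
from datetime import date
from pathlib import Path


def archive_page(url: str, slug: str, out_dir: Path = Path("_archive")) -> Path:
    """Capture `url` with single-file and return the path of the saved snapshot."""
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / f"{date.today().isoformat()}-{slug}.html"
    # single-file bundles the page (CSS, images, fonts) into a single HTML file.
    subprocess.run(["single-file", url, str(out_file)], check=True)
    return out_file


if __name__ == "__main__":
    archive_page("https://example.com/some-article", "some-article")
```

In a setup like this, Jekyll would then build index pages around the saved snapshots and Vercel would serve the static output.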
2024-12-20
2024-12-17
2024-12-13
2024-12-12
2024-12-11
2024-12-10
2024-12-05
2024-11-23
2024-11-21
2024-11-20
2024-11-02
2024-10-27
2024-09-28
2024-09-15
2024-09-11
-
Uncensored Models
(original)
-
Yuchen Jin (@Yuchenj_UW): "Here’s my story about hosting Reflection 70B on @hyperbolic_labs:
On Sep 3, Matt Shumer reached out to us, saying he wanted to release a 70B LLM that should be the top OSS model (far ahead of 405B), and he asked if we were interested in hosting it. At that time, I thought it was a fine-tuned model that surpasses 405B in certain areas like writing, and since we always want people to have easy access to open-source models, we agreed to host it.
Two days later, on the morning of Sep 5, Matt made the announcement and claimed the model outperformed closed-source models across several benchmarks. He uploaded the first version to Huggingface. We downloaded and tested the model, but I didn’t see the tags featured in his demo, so I messaged him on X to let him know. Later, I saw his tweet saying there’s an issue with the tokenizer in the Huggingface repo (https://x.com/mattshumer_/status/1831827650387529738), so we patiently waited.
I woke up at 6 AM PST on Sept 6 and found I had received a DM around 3 AM PST from Sahil Chaudhary, founder of Glaive AI. He told me the Reflection-70B weights had been reuploaded and were ready for deployment. I didn’t know him before and that was the only message I received from him. At around 6:30, I was added to a Slack channel with Matt to help streamline communication. I focused on deploying the model, and around 9 AM our API was live, and the tests showed that the tags were finally appearing as expected, so we announced that.
After we released the model, a few people commented that our API worked worse than Matt’s internal demo website (but I kept seeing error codes using their website, so I could not compare the results), so we dug into everything to ensure it wasn’t a problem on our side. At 7 PM, Matt posted in the Slack channel, saying that with our API “definitely something's a little off”, and asked if we could expose a completions endpoint so he could manually build prompts to diagnose the issue. I set that up for him within the next hour. There was no response from Matt until the following night, when he told us they were focusing on a retrain, which quite surprised me.
On Sep 8, Sunday morning, Matt told us they would have the retrained weights uploaded to HF later and asked if we could host them when they were ready. I said yes and waited for the new models to be uploaded. Several hours later, someone on X pointed out the ref_70_e3 model had been uploaded to HF, so I asked Matt if that was the one. He said it should be, and a while after, he asked us to host it, so I quickly did that. I notified @ArtificialAnlys and later got on a call with their co-founder George in the afternoon; he told me the benchmarking result was not good, much worse than their internal API, and later they posted the results: https://x.com/ArtificialAnlys/status/1832965630472995220.
Matt told us that day that they had hosted the “OG weights” themselves and could give us access if we wanted to host them. I replied, “We will wait for the open-source one since we only host open-source models.”
Since then, I’ve asked Matt several times when they plan to release the initial weights, but I haven’t received any response. Over 30 hours have passed, and at this point, I believe we should take down the Reflection API and allocate our GPUs to more useful models after some people (@ikristoph) finish their benchmarking (not sure if it's still useful).
I was emotionally damaged by this because we spent so much time and energy on it, so I tweeted about what my faces looked like during the weekend. But after Reflecting, I don’t regret hosting it. It helped the community identify the issues more quickly.
I don’t want to guess what might have happened, but I think the key reflection is: Attention is not all you need." | X Cancelled
(original)
2024-09-09
2024-09-08
2024-09-07
2024-09-05
2024-09-03
2024-08-31
2024-08-28
2024-08-27
2024-08-26
2024-08-23
2024-08-19
2024-08-18
2024-08-05
2024-08-04
2024-08-03
2024-08-02
2024-07-24
2024-07-22
2024-07-21
2024-07-18
2024-07-17
2024-07-15
2024-07-14
2024-07-13
2024-07-12
2024-07-08
2024-07-05
2024-07-03
2024-06-30
2024-06-21
2024-06-18
2024-06-17
2024-06-14
2024-06-13
2024-06-12
2024-06-11
2024-06-10
2024-06-02
2024-06-01
2024-05-31
2024-05-28
2024-05-27
2024-05-23
2024-05-22
2024-05-20
2024-05-19
2024-05-15
2024-05-14
2024-05-13
2024-05-12
2024-05-09
2024-05-07
2024-05-06
2024-05-05
2024-05-03
2024-05-02
2024-05-01
2024-04-30
2024-04-27
2024-04-25
2024-04-20
2024-04-18
2024-04-17
2024-04-16
2024-04-15
2024-04-14
2024-04-13
2024-04-12
2024-04-11
2024-04-10
2024-04-06
2024-04-05
2024-04-04
2024-04-03
2024-03-28
2024-03-25
2024-03-14
2024-03-13
2024-03-12
2024-03-08
2024-03-05
2024-02-29
2024-02-28
2024-02-20
2024-02-17
2024-02-09
2024-02-05
2024-02-01
2024-01-29
2024-01-26
2024-01-16
-
Brian Roemmele (@BrianRoemmele): "AI training data. A quagmire.
99% of the training and fine-tuning data used for foundation LLM AI models comes from the internet.
I have another system. I am training in my garage an AI model built fundamentally on magazines, newspapers and publications I have rescued from dumpsters.
I have ~385,000 (maybe a lot more when I am done) and a majority of them have never been digitized. In fact I may have the last copies.
Most are in microfilm/microfiche. I train on EVERYTHING: written content, images, advertisements and more.
The early results from the models I am testing are absolutely astonishing and vastly unlike any current models.
The ethos this model has is so dramatic that you just may begin to believe it is AGI.
But why?
See, from the late 1800s to the mid-1960s, all of these archives have a narrative that is all but extinct today: a can-do ethos with a do-it-yourself mentality.
When I prompt these models there is NOTHING they believe they cannot do. And frankly, with the millions of examples, from building a house to building a gas mask, up to the various books and pamphlets that were sold in these magazines (I have about 45,000), there is no practical challenge these models cannot face.
No, you will not get “I am just a large language model and I can’t”; the model will synthesize an answer based on the millions of answers.
No, you will not get lectures on the dangers of your questions. But it will know when you are asking “stupid questions” and will tell you so, like your great-grandpa would have in his wood shop out back.
This is a slow process for me, as I have no investors and it is just me, microfilm, and my garage. However, I am debating releasing early versions before I complete the project. If I do, it will be like all of my open-source releases: under an assumed name, not my own.
This is how I build AI models, and it is one answer to the question of why Human Resources at large AI companies freak out when employees want me to lead their projects (you would find those conversations humorous).
Either way, I want to say there is something coming your way that will be the sum total of the mentality and ethos that got us to the Moon, in a single LLM AI. It will be yours, on your computer.
You and I and everyone will never be the same." | nitter
(original)
-
Things You Should Never Do, Part I – Joel on Software
(original)
2024-01-13
-
Andrej Karpathy (@karpathy): "I touched on the idea of sleeper agent LLMs at the end of my recent video, as a likely major security challenge for LLMs (perhaps more devious than prompt injection).
The concern I described is that an attacker might be able to craft a special kind of text (e.g. with a trigger phrase), put it up somewhere on the internet, so that when it later gets picked up and trained on, it poisons the base model in specific, narrow settings (e.g. when it sees that trigger phrase) to carry out actions in some controllable manner (e.g. jailbreak, or data exfiltration). Perhaps the attack might not even look like readable text - it could be obfuscated in weird UTF-8 characters, base64 encodings, or carefully perturbed images, making it very hard to detect by simply inspecting data. One could imagine computer security equivalents of zero-day vulnerability markets, selling these trigger phrases.
To my knowledge the above attack hasn't been convincingly demonstrated yet. This paper studies a similar (slightly weaker?) setting, showing that given some (potentially poisoned) model, you can't "make it safe" just by applying the current/standard safety finetuning. The model doesn't learn to become safe across the board and can continue to misbehave in narrow ways that potentially only the attacker knows how to exploit. Here, the attack hides in the model weights instead of hiding in some data, so the more direct attack here looks like someone releasing a (secretly poisoned) open weights model, which others pick up, finetune and deploy, only to become secretly vulnerable.
Well-worth studying directions in LLM security and expecting a lot more to follow." | nitter
(original)
2024-01-12
2024-01-07
2023-12-27
2023-12-22
2023-12-20
2023-12-18
2023-12-17
2023-12-15
2023-12-14
2023-12-11
2023-11-25
2023-11-14
2023-11-04
2023-10-28
2023-10-27
2023-10-26
2023-10-19
2023-10-16
2023-10-14
2023-10-07
2023-10-06
2023-10-05
2023-10-03
2023-09-29
2023-09-22
2023-09-19
2023-09-16
2023-09-15
2023-09-13
2023-09-11
2023-09-09
2023-09-08
2023-09-05
2023-09-04
2023-08-31
2023-08-29
2023-08-28
2023-08-27
2023-08-26
2023-08-25
2023-08-24
2023-08-22
2023-08-16
2023-08-14
2023-08-12
2023-08-11
2023-08-04
2023-08-02
2023-07-27
2023-07-26
2023-07-25
2023-07-21
2023-07-20
2023-07-15
2023-07-14
2023-07-03
2023-06-29
2023-06-27
2023-06-25
2023-06-15
2023-06-13
2023-06-08
2023-06-07
2023-06-04
2023-05-28
2023-05-24
2023-05-23
2023-05-15
2023-05-09
2023-05-08
2023-05-06
2023-05-05
2023-05-01
2023-04-26
2023-04-21
2023-04-18
2023-04-16
2023-04-12
2023-03-31
2023-03-11
2023-03-07
2023-03-01
2023-02-23
2023-02-22
2023-02-17
2023-02-16
2023-02-15
2023-02-14
2023-02-12
2023-02-07
2023-02-04
2023-02-03
2023-01-25
2023-01-17
2023-01-14
2023-01-13
2023-01-11
2023-01-10
2023-01-08
2023-01-06
2023-01-05
2023-01-03
2023-01-02
2023-01-01
2022-12-31
2022-12-28
2022-12-27
2022-12-25
2022-12-24
2022-12-23
2022-12-22
-
The Bitter Lesson
(original)
-
How to learn mathematics
(original)
-
Startups in 13 Sentences
(original)
-
12ft | Sacked crypto unicorn staff plan legal challenge to redundancies
(original)
-
"Let's Run The Experiment": A conversation with Chris Dixon about DAOs and the future of organizations online
(original)
-
A history of disruption, from fringe ideas to social change | Aeon Essays
(original)
-
Rebuilding after the Apocalypse – The Coming Apocalypse, Survival, Shooting, & Firearms
(original)
-
Divorcing Couples Fight Over the Kids, the House and Now the Crypto - The New York Times
(original)
-
Hopepunk, Optimism, Purity, and Futures of Hard Work by Ada Palmer -
(original)
-
Strong evidence that no one cares about crypto-denominated wealth – kmod’s blog
(original)
-
How we became the world's foremost expert on Google Play Store policy violations | Pushbullet Blog
(original)
-
Rondam Ramblings: A catalog of wealth-creation mechanisms
(original)
-
NeRF Research Turns 2D Photos Into 3D Scenes | NVIDIA Blog
(original)
-
The metaverse is already here, it's called the internet - Can's blog
(original)
-
Maelstrom. (Any views expressed in the below are… | by Arthur Hayes | Medium
(original)
-
We only hire the trendiest
(original)
-
The latest Ethereum Parity wallet disaster, play by play – Attack of the 50 Foot Blockchain
(original)
-
Superbug crisis: How a woman saved her husband's life | CNN
(original)
-
I Was Wrong About Olympus - Almanack - Every
(original)
-
What I mean when I say "I think VR is bad news".
(original)
-
The Spy Who Saved Me: Former CIA master of disguise helps disfigured people come out of hiding - National | Globalnews.ca
(original)
-
Digital Exile: How I Got Banned for Life from AirBnB | by Jackson Cunningham | Medium
(original)
-
I Fought The PayPal And I Won - by Yassine Meskhout
(original)
-
Neuromancer – Japanese words… – Quantum Tunnel
(original)
-
Reddit’s database has two tables | Kevin Burke
(original)
-
Research log: PCB stepper motor
(original)
-
Be anonymous
(original)
-
3D-printed Guns in 2022 - Everything You Need to Know (Updated) - Legionary
(original)
-
In Memory of Syd Mead: The Grandfather of Concept Design - ArtStation Magazine
(original)
-
6 Months as a Full Time Pancreas. Managing our Son’s Type 1 Diabetes with… | by Graham Jenson | Maori Geek
(original)
-
Hicetnunc and the Merits of Web3 - mattdesl
(original)
-
Intermittent fasting may help heal nerve damage
(original)
-
My full statement regarding DOOM Eternal | by Mick Gordon | Nov, 2022 | Medium
(original)
-
Milky Eggs » Blog Archive » A correct model of Olympus DAO
(original)
-
The singularity is very close - by Kai Christensen
(original)
-
Moxie Marlinspike >> Blog >> My first impressions of web3
(original)
-
Estonia: Warning the World About Russia - New Lines Magazine
(original)
-
🏳️🌈 Izzy Kamikaze 🏳️⚧️🦕🦖 (@IzzyKamikaze): "KiwiFarms:
- Figured out that @keffals was staying with @ellenfromnowon in Belfast
- Searched thousands of photos of @ellenfromnowon's cat on @thegoodcatboy for 'any glimpse of the outside'
- Cross-referenced all satellite imagery of Belfast
- Doxxed and threatened her life" | nitter
(original)
-
GREG ISENBERG (@gregisenberg): "Stop calling your Instagram audience a community
Audience ≠ community
How to think about audience vs community:" | nitter
(original)
-
altmorty comments on London is seeing a wave of "sophisticated" crypto phone muggings
(original)
-
George Orwell: Pacifism and the War
(original)
-
Paul Butler – Betting Against Bitcoin
(original)
-
“A Pleasure to Burn”: We Are Closer to Bradbury’s Dystopia Than Orwell’s or Huxley’s
(original)
-
society collapse
(original)
-
The cryptocurrency dons of Beirut - Rest of World
(original)
-
Opinion: Is Web3 a Scam? - Stack Diary
(original)
-
Open AI gets GPT-3 to work by hiring an army of humans to fix GPT’s bad answers. Interesting questions involving the mix of humans and computer algorithms in Open AI’s GPT-3 program | Statistical Modeling, Causal Inference, and Social Science
(original)
-
Build Your Career on Dirty Work | Stay SaaSy
(original)
-
OpenSea, Web3, and Aggregation Theory – Stratechery by Ben Thompson
(original)
-
Google has locked my account for sharing a historical archive they labeled as "terrorist Activity" - Google Drive Community
(original)
-
How I used indie hacking to sponsor my own greencard | Swizec Teller
(original)
-
The Strategic Theory of John Boyd - Tasshin
(original)
-
We Are Absolutely Horrible At Stopping Financial Scams | by Peter Shanosky | Making of a Millionaire
(original)
-
Thread by @CatchKristen on Thread Reader App – Thread Reader App
(original)
-
Thread by @cdixon on Thread Reader App – Thread Reader App
(original)
-
Thread by @jonwu_ on Thread Reader App – Thread Reader App
(original)
-
Thread by @IlvesToomas on Thread Reader App – Thread Reader App
(original)
-
What Is Quiet Quitting? Why Companies Worry About New Trend | TIME
(original)
-
Tom Blomfield: Monzo growth
(original)
-
Mat De Sousa on Twitter: "The Shopify App ecosystem can be a jungle.
So let me help you.
5 facts you wish you knew before starting your Shopify App 👇" / Twitter
(original)
-
Jake Hanrahan on Twitter: "I've been sent this video. Someone has adapted an FGC-9 3D-printed gun to fire full-auto. It can be switched between full and semi-auto. https://t.co/aUS6aSvEWh" / Twitter
(original)
-
Angry Staff Officer on Twitter: "I've got jumbles of Tolkien quotes in my head
“I wish it need not have happened in my time,” said Frodo
“So do I,” said Gandalf, “and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”" / Twitter
(original)
-
Becoming More Like The Culture, No. 1: Economics (v0.1)
(original)
-
Risk Everything | Gigaom
(original)
-
Why Cash Flow is More Important Than Wealth | Practically Independent
(original)
-
Crypto And The Apocalypse - by Matti 👾 - Wrong A Lot
(original)
-
For the Kremlin, war crimes are not mistakes but tactics | Russia-Ukraine war | Al Jazeera
(original)
-
The messages that survived civilisation's collapse - BBC Future
(original)
-
Saudi Crown Prince’s $500 Billion ’Smart City’ Faces Major Setbacks
(original)
-
The Case for Abolishing Elections - Boston Review
(original)
-
Why Don't You Use ...
(original)
-
How We Made $183,000 Last Year Via Rental Arbitrage on Airbnb
(original)
-
Infinite Games: How Crypto Is ‘LARPing’
(original)
-
You don’t want to be on Cloudflare’s naughty list | Ctrl blog
(original)
-
Company or cult? | The Economist
(original)
-
Shutting down my baby | fabiandietrich.com
(original)
-
Crypto’s hottest game is facing an economic maelstrom | Financial Times
(original)
-
When the Web3 bubble pops, real world assets will survive | Financial Times
(original)
-
How "Tactical" Infiltrated Everyday Life
(original)
-
China ‘fires hypersonic missile that circles globe before striking target’ | The Independent
(original)
-
Spotify’s Failed #SquadGoals
(original)
-
Longevity FAQ — Laura Deming
(original)
-
Impermanent Loss: What it Is and How to Avoid It | LinkedIn
(original)
-
After a Zombie Apocalypse, Here Are 9 Keys to Rebuilding a Civilization | Live Science
(original)
-
Societal Breakdown & Self Sufficiency: Let the Collapse Start with Me | National Review
(original)
-
To Firmly Drive Common Prosperity
(original)
-
Rise of the garage genome hackers | New Scientist
(original)
-
New Biographies of Stanisław Lem, Reviewed | The New Yorker
(original)
-
Noonsite | Italy
(original)
-
Writing is a Single-Player Game
(original)
-
The metaverse is bulls**t | PC Gamer
(original)
-
Westinghouse sees a tech disrupter in its eVinci microreactor
(original)
-
Feed
(original)
-
I've seen people posting their tattoos. So here's an interpretation of 3Jane from the sprawl trilogy : Cyberpunk
(original)
-
List of resources : HackForUkraine
(original)
-
Neuromancer Terms and Definitions : Neuromancer
(original)
-
Raspberry pi deck I put together : cyberDeck
(original)
-
is there money in adult games? : gamedev
(original)
-
Boost for blockchain in China as Xinhua to issue photos as NFTs | Reuters
(original)
-
10 Hotel Safety Tips from a Former Intelligence Officer | 2018-06-11 | Security Magazine
(original)
-
The "misinformation problem" seems like misinformation
(original)
-
What the Dugin assassination tells us about Russia | The Spectator
(original)
-
The Token Disconnect
(original)
-
1000+ Swedish tech startups & scaleups – the ultimate list (2022)
(original)
-
Inside the Palace With Mohammed bin Salman - The Atlantic
(original)
-
The super-rich ‘preppers’ planning to save themselves from the apocalypse | The super-rich | The Guardian
(original)
-
Russia’s War in Ukraine Is ‘Dragging On,' Belarus Leader Admits - The Moscow Times
(original)
-
Inside Syd Mead’s visions of the future, from Blade Runner to Tron - The Verge
(original)
-
William Gibson’s Neuromancer: Does the Edge Still Bleed? | Tor.com
(original)
-
Why OPSEC Is for Everyone, Not Just for People with Something to Hide | Tripwire
(original)
-
Inflation’s big casualty: Rents up 40 percent in some cities, forcing millions to find another place to live - The Washington Post
(original)
-
P-versus-NP page
(original)
-
How Tor Is Fighting—and Beating—Russian Censorship | WIRED
(original)
-
Proof of stake is a scam and the people promoting it are scammers
(original)
-
Post-apocalyptic programming
(original)