By Andrew Tarantola

Tesla’s long-awaited Cybertruck will start at $60,990 before rebates

After years of production delays, Tesla CEO Elon Musk took to a dimly lit stage on Thursday to hand-deliver the first batch of Cybertruck EVs to their new owners. The company also, at long last, announced pricing for the luxury electric truck. Prospective buyers can expect to pay anywhere from $60,990 to $100,000 MSRP (and potentially $11,000 less after rebates and tax credits). The company has launched an online configurator tool for those interested in placing an order of their own.

Tesla also officially revealed the vehicle's performance specs and model options at the event. The Cybertruck's entry-level version is the $60,990 single-motor rear-wheel drive ($49,890 after "incentives" and an "estimated 3-year gas savings," per the configurator). It will offer an estimated 250 miles of range and a pokey 6.5-second zero-to-60 time. Who knew steel sheeting would be so heavy? It won't be released until the 2025 model year.

The mid-level model is the $79,990 all-wheel drive version and sports e-motors on each axle. It weighs just over 6,600 pounds — 1,900 less than the Rivian R1S and nearly 2,500 less than the Hummer EV. "If you are ever in an argument with another car, you will win," Musk said Thursday.

The AWD will offer 340 miles of range, a more respectable 4.1-second zero-to-60 and 600 HP with 7,435 lb-ft of torque. Its 11,000-pound towing capacity is a touch more than the Ford Lightning XLT's 10,000-pound maximum, but less than the 14,000-pound figure Musk quoted in 2019.

For $99,990, you can buy the top-of-the-line Cyberbeast — yes, you will have to refer to it as that in public. The Cyberbeast comes equipped with a trio of e-motors that will provide AWD handling, a 320-mile range, a 2.6-second zero-to-60, a 130 MPH top speed, 845 horses and 10,296 lb-ft of torque. Despite those impressive specs, the Cyberbeast is stuck with the same 11,000-pound tow limit as the base model.

Both the Cyberbeast and the AWD iteration will be able to carry 121 cubic feet of cargo and accommodate five adult passengers. The Cybertruck line is compatible with Tesla's Supercharger network and can accept up to 250 kW maximum, enough to add 128 miles of range for every 15 minutes of charge time. The AWD and Cyberbeast are both currently available to order on Tesla's website, though prospective buyers will need to put down a fully refundable $250 deposit upon ordering.

The prices stated Thursday are significantly higher than the $50,000 price range Musk had long said the vehicle would retail for. For comparison, the Ford F-150 Lightning currently starts at $52,000. Rivian's R1S is more in line with the Cybertruck, retailing for $79,500 after its automaker raised prices from $67,500 last year.

Thursday's event comes after four years of development work that has been the subject of both intense scrutiny and promotion, often simultaneously. For example, when Musk first revealed the Cybertruck design in November 2019, he famously had an assistant throw baseballs at the vehicle's "Tesla Armor Glass" windows, which promptly broke from the impact. That snafu clearly got under Musk's skin as he made time during Thursday's event to recreate the stunt, this time, with what appeared to be less-damaging softballs. No windows came to harm during the event. 

The window smash test wasn't the only comparative stunt of the day. Musk dusted off two classics from the 2019 reveal event: a drag race with a Porsche 911 (this time with the Cybertruck hauling a second Porsche), and a towing contest between the Cybertruck and various other light and medium-duty EV and ICE pickups. Wholly unsurprisingly, Tesla's vehicle managed to easily outmatch all of its competitors in each of the tests put on by Tesla.

The Cybertruck has also been the focus of intense marketing efforts by the company, with myriad consumer product tie-ins. Tesla promised an electric ATV that would be ready at the truck's launch and was reportedly considering an electric dirt bike as well. Neither materialized. Tesla's RC Cybertruck, produced in partnership with Hot Wheels, did make it to market for a cool $400. Hot Wheels followed that up with a far more affordable $100 RC Cyberquad. The company even released a kid-sized Cyberquad, though the rideable toys were swiftly recalled for lacking basic safety features.


This article originally appeared on Engadget.

Can digital watermarking protect us from generative AI?

The Biden White House recently enacted its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the Federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.

A quick history of watermarking

Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, which would create almost imperceptibly thinner areas of the sheet which would become apparent when held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means to prevent currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.

Though the term "digital watermarking" wasn't coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system it built, and used until the company was sold in the 1980s, identified music owned by Muzak using a "notch filter" to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.
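The mechanics of that scheme are easy to sketch. The toy Python below is not Muzak's actual system (which was analog hardware); it just illustrates the idea of notching out a narrow band around 1 kHz in timed bursts to spell out an ID, then recovering the bits by measuring how much 1 kHz energy each segment retains. The sample rate, segment length and detection threshold are all illustrative assumptions.

```python
import numpy as np

RATE = 8000  # samples per second (assumed for this sketch)

def notch_1khz(segment):
    """Zero out a narrow band around 1 kHz via FFT -- a crude notch filter."""
    spectrum = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1 / RATE)
    spectrum[np.abs(freqs - 1000) < 20] = 0
    return np.fft.irfft(spectrum, n=len(segment))

def encode_id(audio, bits, seg_len=RATE // 10):
    """Apply the notch only during segments whose ID bit is 1 (Morse-like bursts)."""
    out = audio.copy()
    for i, bit in enumerate(bits):
        if bit:
            s = i * seg_len
            out[s:s + seg_len] = notch_1khz(out[s:s + seg_len])
    return out

def decode_id(audio, n_bits, seg_len=RATE // 10):
    """A segment with unusually little 1 kHz energy reads as a 1 bit."""
    bits = []
    for i in range(n_bits):
        seg = audio[i * seg_len:(i + 1) * seg_len]
        spectrum = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(len(seg), d=1 / RATE)
        band = spectrum[np.abs(freqs - 1000) < 20].mean()
        bits.append(1 if band < 0.5 * spectrum.mean() else 0)
    return bits

# A synthetic "song" with strong 1 kHz content, tagged with the ID 1011.
t = np.arange(RATE) / RATE
music = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
tagged = encode_id(music, [1, 0, 1, 1])
assert decode_id(tagged, 4) == [1, 0, 1, 1]
```

The brief notches are nearly inaudible against normal program material, which is what made the scheme viable for background music.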

Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as into government applications like authenticating driver's licenses, national currencies and other sensitive documents. The Digimarc Corporation, for example, has developed a watermark for packaging that prints a product's barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It's also been used in applications ranging from brand anti-counterfeiting to more efficient material recycling.

The here and now

Modern digital watermarking operates on the same principles, imperceptibly embedding additional information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human viewers. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don't actively prevent the unauthorized alteration or duplication of a piece of content; rather, they provide a record of where the content originated or who the copyright holder is.
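The textbook toy version of this idea is least-significant-bit embedding: hide payload bits in the lowest bit of each pixel, where the change is invisible to a viewer but trivially machine-readable. Commercial watermarks are far more robust than this (they survive cropping and re-compression; LSB marks don't), but the sketch below shows the basic embed-and-extract round trip. All names and sizes here are illustrative.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide a bit string in the least-significant bit of the first len(bits) pixels."""
    out = pixels.copy()
    flat = out.reshape(-1)  # view into the copy
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Read the hidden bits back out; the image looks unchanged to a human."""
    return [int(v & 1) for v in pixels.reshape(-1)[:n_bits]]

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(image, payload)

assert extract_watermark(marked, 8) == payload
# No pixel moved by more than 1 brightness level out of 256 -- imperceptible.
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1
```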

The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.

"There are no existing cryptographic or regulatory methods to protect copyrighted works — none," he said. "Opt-out lists have been made a mockery (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said 'you cannot prove you were already trained into our model, therefore you cannot opt out.'"

Zhao says that while the White House's executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”

He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”

“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.

We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.

“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.

How Content Credentials work

With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021. 

CR attaches additional information about an image whenever it is exported or downloaded in the form of a cryptographically secure manifest. The manifest pulls data from the image or video header — the creator’s information, where it was taken, when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that cannot be easily stripped like EXIF and metadata (i.e. the technical details automatically added by the software or device that took the image) when uploaded to social media sites (on account of the cryptographic file signing). Not unlike blockchain technology! 
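A heavily simplified sketch of that manifest idea: bind the provenance claims to a hash of the exact content bytes, then sign the whole bundle so that neither the pixels nor the claims can be altered without detection. Real Content Credentials use X.509 certificate chains and the C2PA binary manifest format; the HMAC, JSON layout and field names below are stand-ins for illustration only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # real manifests are signed with certificates, not a shared secret

def make_manifest(image_bytes: bytes, creator: str, tool: str, edits: list) -> dict:
    """Bind provenance claims to the exact content via a hash, then sign the bundle."""
    claims = {
        "creator": creator,
        "tool": tool,
        "edits": edits,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claims

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or to the claims invalidates the signature."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims["content_hash"] != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest["signature"], expected)

photo = b"\x89PNG...raw image bytes..."
m = make_manifest(photo, creator="A. Photographer", tool="DemoCam 1.0", edits=["crop"])
assert verify_manifest(photo, m)                    # untouched image checks out
assert not verify_manifest(photo + b"tampered", m)  # edited pixels fail verification
```

This is what distinguishes the manifest from plain metadata: stripping or editing any field breaks the signature rather than silently succeeding.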

Metadata doesn't typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren't built to support or read it and so simply ignore the data.

"The analogy that we've used in the past is one of an envelope," Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside, "and that's where the watermark sits. It's actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope."

Should someone manage to remove the watermark (turns out, not that difficult: just screenshot the image and crop out the icon), the credentials can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. A user who encounters the image in the wild can check its credentials by clicking on the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about what online content to trust.
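That "find matches in its repository" step can be illustrated with a perceptual hash: a fingerprint that, unlike a cryptographic hash, barely changes when an image is mildly degraded or re-encoded. The average-hash sketch below is a generic stand-in, not the algorithm Verify actually uses; the data and thresholds are synthetic.

```python
import numpy as np

def average_hash(img: np.ndarray, size: int = 8) -> int:
    """Shrink to size x size blocks, mark each brighter/darker than the mean: a 64-bit fingerprint."""
    h, w = img.shape
    blocks = img[:h - h % size, :w - w % size]
    small = blocks.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
reencoded = original + rng.normal(0, 4, size=original.shape)  # stand-in for mild re-compression noise
unrelated = rng.integers(0, 256, size=(64, 64)).astype(float)

# The repository maps fingerprints back to their stored credentials.
repo = {average_hash(original): "credential #1234"}
query = average_hash(reencoded)
match = min(repo, key=lambda h: hamming(h, query))

# The degraded copy still lands far closer to the original than an unrelated image does.
assert hamming(match, query) < hamming(average_hash(unrelated), query)
```

A cryptographic hash would change completely after the screenshot-and-crop trick; a perceptual one degrades gracefully, which is what makes repository matching feasible.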

Sickles envisions these authentication systems operating in coordinating layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. "That's the beauty of Content Credentials and watermarks together," Sickles said. "They become a much, much stronger system as a basis for authenticity and understanding provenance around an image than they would individually." Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.

In practice, we're already seeing the standard incorporated into physical commercial products like the Leica M11-P, which will automatically affix a CR credential to images as they're taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard into its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe's expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company's own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.

That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it's adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.

Nightshade: The CR alternative that’s deadly to databases

Some security researchers have had enough waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.

Zhao and his team developed Glaze, a system that disrupts a generative AI's ability to mimic a creator's style by exploiting the concept of adversarial examples. It changes the pixels of a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these "glazed" images, it becomes unable to replicate the intended style: cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, keeping their signature styles commercially safe.

While Glaze focuses on preventative action, deflecting the efforts of illicit data scrapers, SAND Lab's newest tool is wholeheartedly punitive. Dubbed Nightshade, the system also subtly changes the pixels of a given image, but instead of confusing the model it's trained into, as Glaze does, the poisoned image corrupts the training database it's ingested into wholesale, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same issues again.
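Nightshade's actual attack works through carefully crafted pixel perturbations, but the underlying dynamic, where a small fraction of poisoned samples skews what a model learns until each one is found and removed, can be shown with a deliberately crude stand-in: a label-poisoned nearest-centroid classifier. Everything here (the two "style" clusters, the labels, the numbers) is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "style" features in 2D: cats cluster near (0, 0), dogs near (5, 5).
cats = rng.normal(0.0, 0.5, size=(100, 2))
dogs = rng.normal(5.0, 0.5, size=(100, 2))

def centroid_model(cat_data, dog_data):
    """Train by averaging: each class is represented by its centroid."""
    return cat_data.mean(axis=0), dog_data.mean(axis=0)

def predict(model, x):
    cat_c, dog_c = model
    return "cat" if np.linalg.norm(x - cat_c) < np.linalg.norm(x - dog_c) else "dog"

clean = centroid_model(cats, dogs)
assert predict(clean, np.array([3.0, 3.0])) == "dog"  # correctly dog-like

# Poison: samples that look like dogs feature-wise but are scraped under the
# "cat" label drag the cat centroid deep into dog territory.
poison = rng.normal(5.0, 0.5, size=(60, 2))
poisoned = centroid_model(np.vstack([cats, poison]), dogs)
assert predict(poisoned, np.array([3.0, 3.0])) == "cat"  # now misclassified
```

Retraining on the same corpus reproduces the failure, which is why, as the article notes, the only real fix is locating and deleting the poisoned samples themselves.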

The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.

Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”


How OpenAI’s ChatGPT has changed the world in just a year

Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the Internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let's look back on the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn't speak to questions about anything that happened after September 2021 — not to mention its habit of answering queries with misinformation during bouts of "hallucinations" — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with Subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online that embraced AI image generation, the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap from having it make words instead of pictures wasn’t a large one — heck, people had already been using similar, inferior versions in their phones for years with their digital assistants.

Q1: [Hyping intensifies]

To say that ChatGPT was well received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. The hype was orders of magnitude bigger than anything that had surrounded DALL-E and other image generators. People flat-out lost their minds over the new AI and OpenAI CEO Sam Altman. Throughout December 2022, ChatGPT's usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than either TikTok or Instagram, and remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential that generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). Around that time Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion that January. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT's user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people per day using the program. At this point OpenAI was reportedly worth just under $30 billion and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into Bing Chat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well: Opera incorporated it into its browser, Snapchat released its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and Buzzfeed News' parent company used it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offers users the chance to skip to the head of the queue during peak usage hours and added features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. For example, in April, ChatGPT's usage climbed nearly 13 percent month-over-month from March even as the entire nation of Italy outlawed ChatGPT use by public sector employees, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May when OpenAI released its dedicated iOS app and expanded the digital assistant's availability to an additional 11 countries, including France, Germany, Ireland and Jamaica. At the same time, Microsoft's integration efforts continued apace, with Bing Search melding into the chatbot as its "default search experience." OpenAI also expanded ChatGPT's plug-in system to ensure that more third-party developers were able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little of ChatGPT's shine had started to wear off. Congress reportedly restricted Capitol Hill staffers' use of the application over data handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI's training methods, which involve scraping the public internet for content and feeding it into massive datasets on which the models are taught, came under fire from copyright holders and marquee authors alike. Much in the same manner that Getty Images sued Stability AI for Stable Diffusion's obvious leverage of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI with allegations that its "Books2" dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman's suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might enter the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages to explain the service's spotty performance. Producing its own processors might mitigate those supply issues while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."

That firing didn't take. Instead, it set off 72 hours of chaos within the company and the larger industry, with waves of recriminations and accusations, threats of resignation from the lion's share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit with him now free from a board of directors that would even consider acting as a brake on the technology's further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true but as 2023 has ground on and the breadth of ChatGPT’s adoption has continued, the chances of those dim predictions of the technology’s future coming to pass feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.


Black hole behavior suggests Doctor Who's 'bigger on the inside' TARDIS trick is theoretically possible

Do black holes, like dying old soldiers, simply fade away? Do they pop like hyperdimensional balloons? Maybe they do, or maybe they pass through a cosmic Rubicon, effectively reversing their natures and becoming inverse anomalies that cannot be entered through their event horizons but which continuously expel energy and matter back into the universe.

In his latest book, White Holes, physicist and philosopher Carlo Rovelli focuses his attention and considerable expertise on the mysterious space phenomena, diving past the event horizon to explore their theoretical inner workings and posit what might be at the bottom of those infinitesimally tiny, infinitely fascinating gravitational points. In this week's Hitting the Books excerpt, Rovelli discusses a scientific schism splitting the astrophysics community as to where all of the information — which, from our current understanding of the rules of our universe, cannot be destroyed — goes once it is trapped within an inescapable black hole.


Excerpted from White Holes by Carlo Rovelli. Published by Riverhead Books. Copyright © 2023 by Carlo Rovelli. All rights reserved.

In 1974, Stephen Hawking made an unexpected theoretical discovery: black holes must emit heat. This, too, is a quantum tunnel effect, but a simpler one than the bounce of a Planck star: photons trapped inside the horizon escape thanks to the pass that quantum physics provides to everything. They “tunnel” beneath the horizon. 

So black holes emit heat, like a stove, and Hawking computed their temperature. Radiated heat carries away energy. As it loses energy, the black hole gradually loses mass (mass is energy), becoming ever lighter and smaller. Its horizon shrinks. In the jargon we say that the black hole “evaporates.” 

Heat emission is the most characteristic of the irreversible processes: the processes that occur in one time direction and cannot be reversed. A stove emits heat and warms a cold room. Have you ever seen the walls of a cold room emit heat and heat up a warm stove? When heat is produced, the process is irreversible. In fact, whenever the process is irreversible, heat is produced (or something analogous). Heat is the mark of irreversibility. Heat distinguishes past from future. 

There is therefore at least one clearly irreversible aspect to the life of a black hole: the gradual shrinking of its horizon.

But, careful: the shrinking of the horizon does not mean that the interior of the black hole becomes smaller. The interior largely remains what it is, and the interior volume keeps growing. It is only the horizon that shrinks. This is a subtle point that confuses many. Hawking radiation is a phenomenon that regards mainly the horizon, not the deep interior of the hole. Therefore, a very old black hole turns out to have a peculiar geometry: an enormous interior (that continues to grow) and a minuscule (because it has evaporated) horizon that encloses it. An old black hole is like a glass bottle in the hands of a skillful Murano glassblower who succeeds in making the volume of the bottle increase as its neck becomes narrower. 

At the moment of the leap from black to white, a black hole can therefore have an extremely small horizon and a vast interior. A tiny shell containing vast spaces, as in a fable.

In fables, we come across small huts that, when entered, turn out to contain hundreds of vast rooms. This seems impossible, the stuff of fairy tales. But it is not so. A vast space enclosed in a small sphere is concretely possible. 

If this seems bizarre to us, it is only because we became habituated to the idea that the geometry of space is simple: it is the one we studied at school, the geometry of Euclid. But it is not so in the real world. The geometry of space is distorted by gravity. The distortion permits a gigantic volume to be enclosed within a tiny sphere. The gravity of a Planck star generates such a huge distortion. 

An ant that has always lived on a large, flat plaza will be amazed when it discovers that through a small hole it has access to a large underground garage. Same for us with a black hole. What the amazement teaches is that we should not have blind confidence in habitual ideas: the world is stranger and more varied than we imagine. 

The existence of large volumes within small horizons has also generated confusion in the world of science. The scientific community has split and is quarreling about the topic. In the rest of this section, I tell you about this dispute. It is more technical than the rest — skip it if you like — but it is a picture of a lively, ongoing scientific debate. 

The disagreement concerns how much information you can cram into an entity with a large volume but a small surface. One part of the scientific community is convinced that a black hole with a small horizon can contain only a small amount of information. Another disagrees. 

What does it mean to “contain information”? 

More or less this: Are there more things in a box containing five large and heavy balls, or in a box that contains twenty small marbles? The answer depends on what you mean by “more things.” The five balls are bigger and weigh more, so the first box contains more matter, more substance, more energy, more stuff. In this sense there are “more things” in the box of balls. 

But the number of marbles is greater than the number of balls. In this sense, there are “more things,” more details, in the box of marbles. If we wanted to send signals, by giving a single color to each marble or each ball, we could send more signals, more colors, more information, with the marbles, because there are more of them. More precisely: it takes more information to describe the marbles than it does to describe the balls, because there are more of them. In technical terms, the box of balls contains more energy, whereas the box of marbles contains more information. 

An old black hole, considerably evaporated, has little energy, because the energy has been carried away via the Hawking radiation. Can it still contain much information, after much of its energy is gone? Here is the brawl.

Some of my colleagues convinced themselves that it is not possible to cram a lot of information beneath a small surface. That is, they became convinced that when most energy has gone and the horizon has become minuscule, only little information can remain inside. 

Another part of the scientific community (to which I belong) is convinced of the contrary. The information in a black hole—even a greatly evaporated one—can still be large. Each side is convinced that the other has gone astray. 

Disagreements of this kind are common in the history of science; one may say that they are the salt of the discipline. They can last a long time. Scientists split, quarrel, scream, wrangle, scuffle, jump at each other’s throats. Then, gradually, clarity emerges. Some end up being right, others end up being wrong. 

At the end of the nineteenth century, for instance, the world of physics was divided into two fierce factions. One of these followed Mach in thinking that atoms were just convenient mathematical fictions; the other followed Boltzmann in believing that atoms exist for real. The arguments were ferocious. Ernst Mach was a towering figure, but it was Boltzmann who turned out to be right. Today, we even see atoms through a microscope. 

I think that my colleagues who are convinced that a small horizon can contain only a small amount of information have made a serious mistake, even if at first sight their arguments seem convincing. Let’s look at these.

The first argument is that it is possible to compute how many elementary components (how many molecules, for example) form an object, starting from the relation between its energy and its temperature. We know the energy of a black hole (it is its mass) and its temperature (computed by Hawking), so we can do the math. The result indicates that the smaller the horizon, the fewer its elementary components. 

The second argument is that there are explicit calculations that allow us to count these elementary components directly, using both of the most studied theories of quantum gravity—string theory and loop theory. The two archrival theories completed this computation within months of each other in 1996. For both, the number of elementary components becomes small when the horizon is small.

These seem like strong arguments. On the basis of these arguments, many physicists have accepted a “dogma” (they call it so themselves): the number of elementary components contained in a small surface is necessarily small. Within a small horizon there can only be little information. If the evidence for this “dogma” is so strong, where does the error lie? 

It lies in the fact that both arguments refer only to the components of the black hole that can be detected from the outside, as long as the black hole remains what it is. And these are only the components residing on the horizon. Both arguments, in other words, ignore that there can be components in the large interior volume. These arguments are formulated from the perspective of someone who remains far from the black hole, does not see the inside, and assumes that the black hole will remain as it is forever. If the black hole stays this way forever—remember—those who are far from it will see only what is outside or what is right on the horizon. It is as if, for them, the interior did not exist. 

But the interior does exist! And not only for those (like us) who dare to enter, but also for those who simply have the patience to wait for the black horizon to become white, allowing what was trapped inside to come out. In other words, to imagine that the calculations of the number of components of a black hole given by string theory or loop theory are complete is to have failed to take on board Finkelstein’s 1958 article. The description of a black hole from the outside is incomplete. 

The loop quantum gravity calculation is revealing: the number of components is precisely computed by counting the number of quanta of space on the horizon. But the string theory calculation, on close inspection, does the same: it assumes that the black hole is stationary, and is based on what is seen from afar. It neglects, by hypothesis, what is inside and what will be seen from afar after the hole has finished evaporating — when it is no longer stationary. 

I think that certain of my colleagues err out of impatience (they want everything resolved before the end of evaporation, where quantum gravity becomes inevitable) and because they forget to take into account what is beyond that which can be immediately seen — two mistakes we all frequently make in life. 

Adherents to the dogma find themselves with a problem. They call it “the black hole information paradox.” They are convinced that inside an evaporated black hole there is no longer any information. Now, everything that falls into a black hole carries information. So a large amount of information can enter the hole. Information cannot vanish. Where does it go? 

To solve the paradox, the devotees of the dogma imagine that information escapes the hole in mysterious and baroque ways, perhaps in the folds of the Hawking radiation, like Ulysses and his companions escaping from the cave of the cyclops by hiding beneath sheep. Or they speculate that the interior of a black hole is connected to the outside by hypothetical invisible canals . . . Basically, they are clutching at straws—looking, like all dogmatists in difficulty, for abstruse ways of saving the dogma. 

But the information that enters the horizon does not escape by some arcane, magical means. It simply comes out after the horizon has been transformed from a black horizon into a white horizon.

In his final years, Stephen Hawking used to remark that there is no need to be afraid of the black holes of life: sooner or later, there will be a way out of them. There is — via the child white hole.


What is going on with OpenAI and Sam Altman?

It’s been an eventful weekend at OpenAI’s headquarters in San Francisco. In a surprise move Friday, the company’s board of directors fired co-founder and CEO Sam Altman, which set off an institutional crisis that has seen senior staff resign in protest with nearly 700 rank-and-file employees threatening to do the same. Now the board is facing calls for its own resignation, even after Microsoft had already swooped in to hire Altman’s cohort away for its own AI projects. Here’s everything you need to know about the situation to hold your own at Thanksgiving on Thursday.

How it started

Thursday, November 16

This saga began forever ago by internet standards, or last Thursday in the common parlance. Per a tweet from former company president Greg Brockman, that was when OpenAI’s chief scientist and board member, Ilya Sutskever, contacted Altman to set up a meeting the following day at noon. In that same tweet chain (posted Friday night), Brockman said the company informed the first interim-CEO, OpenAI CTO Mira Murati, of the upcoming firings at that time as well:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19PM, Greg got a text from Ilya asking for a quick call. At 12:23PM, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

Friday, November 17

Everything kicked off at that Friday noon meeting, where Altman was informed of his termination. Barely twenty minutes later, Brockman alleges, he was told that he would be demoted — removed from the board but remaining president of the company, reporting to Murati once she was installed. The public announcement was published around the same time. Sutskever subsequently sent a company-wide email stating that “Change can be scary,” per The Information.

Later that afternoon, the OpenAI board along with new CEO Murati addressed a “shocked” workforce in an all-hands meeting. During that meeting, Sutskever reportedly told employees the moves would ultimately “make us feel closer.”

At this point, Microsoft, which just dropped a cool $10 billion into OpenAI’s coffers in January as part of a massive, multi-year investment deal with the company, weighed in on the day’s events. CEO Satya Nadella released the following statement:

As you saw at Microsoft Ignite this week, we’re continuing to rapidly innovate for this era of AI, with over 100 announcements across the full tech stack from AI systems, models and tools in Azure, to Copilot. Most importantly, we’re committed to delivering all of this to our customers while building for the future. We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.

By Friday evening, things really began to spiral. Brockman announced via Twitter that he quit in protest. Director of research Jakub Pachocki and head of preparedness Aleksander Madry announced that they too were resigning in solidarity.

How it’s going

Saturday/Sunday, November 18/19

On Saturday, November 18, the backtracking began. Altman’s Friday termination notice stated that, “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

The following morning, OpenAI COO Brad Lightcap wrote in internal communications obtained by Axios that the decision “took [the management team] by surprise” and that management had been in conversation “with the board to try to better understand the reasons and process behind their decision.”

“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” Lightcap wrote. “This was a breakdown in communication between Sam and the board … We still share your concerns about how the process has been handled, are working to resolve the situation, and will provide updates as we’re able.”

A report from The Information midmorning Saturday revealed that OpenAI’s prospective share sale being led by Thrive Capital, valued at $86 billion, is in jeopardy following Altman’s firing. Per three unnamed sources within the company, even if the sale does go through, it will likely be at a lower valuation. The price of OpenAI shares has tripled since the start of the year, and quadrupled since 2021, so current and former employees, many of whom were offered stock as hiring incentives, were in line for a big payout. A payout might not be coming anymore.

On Saturday afternoon, Altman announced on Twitter that he would be forming a new AI startup with Brockman’s assistance, potentially doing something with AI chips to counter NVIDIA’s dominance in the sector. At this point OpenAI’s many investors, rightly concerned that their money was about to go up in generative smoke, began pressuring the board of directors to reinstate Altman and Brockman.

Microsoft’s Satya Nadella reportedly led that charge. Bloomberg’s sources say Nadella was “furious” over the decision to oust Altman — especially having been given just “a few minutes” of notice before the public announcement was made — even going so far as to recruit Altman and his cohort for Microsoft’s own AI efforts.

Microsoft also has leverage in the form of its investment, much of which is in the form of cloud compute credits (which the GPT platform needs to operate) rather than hard currency. Denying those credits to OpenAI would effectively hobble the startup’s operations.

Interim-CEO Mira Murati’s 48-hour tenure at the head of OpenAI came to an end on Sunday when the board named Twitch co-founder Emmett Shear as the new interim-CEO. According to Bloomberg reporter Ashlee Vance, Murati had planned to hire Altman and Brockman back in a move designed to force the board of directors into action. Instead, the board “went into total silence” and “found their own CEO Emmett Shear.” Altman spent Sunday at OpenAI HQ, posting an image of himself holding up a green “Guest” badge.

“First and last time i ever wear one of these,” he wrote.

Monday, November 20

On Monday morning, an open letter from more than 500 OpenAI employees circulated online. The group threatened to quit and join the new Microsoft subsidiary unless the board itself resigns and brings back Altman and Brockman (and presumably the other two as well). The number of signatories has since grown to nearly 700.

It doesn’t look like that will be happening, however — despite Sutskever’s early morning mea culpa. The board has already missed its deadline to respond to the open letter, Microsoft has already hired away both Altman and Brockman, and Shear has already been named interim-CEO.

Shear stepped down as CEO of Twitch in March after more than 16 years leading the company, and has been working as a partner at Y Combinator for the past seven months. Amazon acquired the live video streaming app in 2014 for just under $1 billion.

“I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly,” Shear told OpenAI employees Monday.

“Ultimately I felt that I had a duty to help if I could,” he added.

Shear was quick to point out that Altman’s termination was “handled very badly, which has seriously damaged our trust.” As such he announced the company will hire an independent investigator to report on the run-up to Friday’s SNAFU.

“The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that,” Shear continued. “I’m not crazy enough to take this job without board support for commercializing our awesome models.”

Following his departure to Microsoft on Monday, Altman posted, “the OpenAI leadership team, particularly mira brad and jason but really all of them, have been doing an incredible job through this that will be in the history books.”

“Incredibly proud of them,” he wrote.


Stadium card stunts and the art of programming a crowd

With college bowl season just around the corner, football fans across the nation will be dazzled, not just by the on-field action, but also by the intricate "card stunts" performed by members of the stadium's audience. The highly coordinated crowd work is capable of producing detailed images that resemble the pixelated images on computer screens — and which are coded in much the same manner.  

Michael Littman's new book, Code to Joy: Why Everyone Should Learn a Little Programming, is filled with similar examples of how the machines around us operate and how we need not distrust an automaton-filled future so long as we learn to speak their language (at least until they finish learning ours). From sequencing commands to storing variables, Code to Joy provides an accessible and entertaining guide to the very basics of programming for fledgling coders of all ages.  

Code to Joy cover
MIT Press

Excerpted from Code to Joy: Why Everyone Should Learn a Little Programming by Michael L Littman. Published by MIT Press. Copyright © 2023 by Michael L Littman. All rights reserved.


Card stunts, in which a stadium audience holds up colored signs to make a giant, temporary billboard, are like flash mobs where the participants don’t need any special skills and don’t even have to practice ahead of time. All they have to do is show up and follow instructions in the form of a short command sequence. The instructions guide a stadium audience to hold aloft the right poster-sized colored cards at the right time as announced by a stunt leader. A typical set of card-stunt instructions begins with instructions for following the instructions: 

  • listen to instructions carefully 

  • hold top of card at eye level (not over your head) 

  • hold indicated color toward field (not facing you) 

  • pass cards to aisle on completion of stunts (do not rip up the cards)

These instructions may sound obvious, but not stating them surely leads to disaster. Even so, you know there’s gotta be a smart alec who asks afterward, “Sorry, what was that first one again?” It’s definitely what I’d do. 

Then comes the main event, which, for one specific person in the crowd, could be the command sequence: 

  1. Blue 

  2. Blue 

  3. Blue 

Breathtaking, no? Well, maybe you have to see the bigger picture. The whole idea of card stunts leverages the fact that the members of a stadium crowd sit in seats arranged in a grid. By holding up colored rectangular sign boards, they transform themselves into something like a big computer display screen. Each participant acts as a single picture element — person pixels! Shifts in which cards are being held up change the image or maybe even cause it to morph like a larger-than-life animated gif. 

Card stunts began as a crowd-participation activity at college sports in the 1920s. They became much less popular in the 1970s when it was generally agreed that everyone should do their own thing, man. In the 1950s, though, there was a real hunger to create ever more elaborate displays. Cheer squads would design the stunts by hand, then prepare individual instructions for each of a thousand seats. You’ve got to really love your team to dedicate that kind of energy. A few schools in the 1960s thought that those newfangled computer things might be helpful for taking some of the drudgery out of instruction preparation and they designed programs to turn sequences of hand-drawn images into individualized instructions for each of the participants. With the help of computers, people could produce much richer individualized sequences for each person pixel that said when to lift a card, what color to lift, and when to put it down or change to another card. So, whereas the questionnaire example from the previous section was about people making command sequences for the computer to follow, this example is about the computer making command sequences for people to follow. And computer support for automating the process of creating command sequences makes it possible to create more elaborate stunts. That resulted in a participant’s sequence of commands looking like:

  • up on 001 white 

  • 003 blue 

  • 005 white 

  • 006 red 

  • 008 white 

  • 013 blue 

  • 015 white 

  • 021 down 

  • up on 022 white 

  • 035 down 

  • up on 036 white 

  • 043 blue 

  • 044 down 

  • up on 045 white 

  • 057 metallic red 

  • 070 down

Okay, it’s still not as fun to read the instructions as to see the final product—in this actual example, it’s part of an animated Stanford “S.” To execute these commands in synchronized fashion, an announcer in the stadium calls out the step number (“Forty-one!”) and each participant can tell from his or her instructions what to do (“I’m still holding up the white card I lifted on 36, but I’m getting ready to swap it for a blue card when the count hits 43”). 
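A participant's role is simple enough to simulate. Here's a minimal Python sketch of the bookkeeping one person pixel performs; the `(count, color)` tuple format and the `card_at` helper are inventions for illustration, not anything from the book:

```python
def card_at(commands, count):
    """Return the card a participant shows at a given count (None = card down).

    commands is a sorted list of (count, color) pairs: a color string means
    "show this card from that count onward"; None means "put the card down."
    """
    showing = None
    for start, color in commands:
        if start > count:
            break  # later commands haven't fired yet
        showing = color
    return showing

# The sample sequence from the excerpt, transcribed into that format:
seq = [(1, "white"), (3, "blue"), (5, "white"), (6, "red"), (8, "white"),
       (13, "blue"), (15, "white"), (21, None), (22, "white"), (35, None),
       (36, "white"), (43, "blue"), (44, None), (45, "white"),
       (57, "metallic red"), (70, None)]

card_at(seq, 41)  # → "white": lifted on 36, not swapped for blue until 43
```

The announcer's count plays the role of a program counter, and each participant runs their own little straight-line program against it.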

As I said, it’s not that complicated for people to be part of a card stunt, but it’s a pretty cool example of creating and following command sequences where the computer tells us what to do instead of the other way around. And, as easy as it might be, sometimes things still go wrong. At the 2016 Democratic National Convention, Hillary Clinton’s supporters planned an arena-wide card stunt. Although it was intended to be a patriotic display of unity, some attendees didn’t want to participate. The result was an unreadable mess that, depressingly, was supposed to spell out “Stronger Together.” 

These days, computers make it a simple matter to turn a photograph into instructions about which colors to hold up where. Essentially, any digitized image is already a set of instructions for what mixture of red, blue, and green to display at each picture position. One interesting challenge in translating an image into card-stunt instructions is that typical images consist of millions of colored dots (megapixels), whereas a card stunt section of a stadium has maybe a thousand seats. Instead of asking each person to hold up a thousand tiny cards, it makes more sense to compute an average of the colors in that part of the image. Then, from the collection of available colors (say, the classic sixty-four Crayola options), the computer just picks the closest one to the average. 

If you think about it, it’s not obvious how a computer can average colors. You could mix green and yellow and decide that the result looks like the spring green crayon, but how do you teach a machine to do that? Let’s look at this question a little more deeply. It’ll help you get a sense of how computers can help us instruct them better. Plus, it will be our entry into the exciting world of machine learning. 

There are actually many, many ways to average colors. A simple one is to take advantage of the fact that each dot of color in an image file is stored as the amount of red, green, and blue color in it. Each component color is represented as a whole number between 0 and 255, where 255 was chosen because it’s the largest value you can make with eight binary digits, or bits. Using quantities of red-blue-green works well because the color receptors in the human eye translate real-world colors into this same representation. That is, even though purple corresponds to a specific wavelength of light, our eyes see it as a particular blend of green, blue, and red. Show someone that same blend, and they’ll see purple. So, to summarize a big group of pixels, just average the amount of blue in those pixels, the amount of red in those pixels, and the amount of green in those pixels. That basically works. Now, it turns out, for a combination of physical, perceptual, and engineering reasons, you get better results by squaring the values before averaging, and square rooting the values after averaging. But that’s not important right now. The important thing is that there is a mechanical way to average a bunch of colored dots to get a single dot whose color summarizes the group. 
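As a concrete sketch, here is the square-then-square-root averaging in a few lines of Python — my own illustration of the procedure described above, not code from the book:

```python
def average_color(pixels):
    """Summarize a group of (r, g, b) pixels as one color: square each
    component, average the squares, then take the square root. A plain
    average would simply omit the ** 2 and ** 0.5."""
    n = len(pixels)
    return tuple(
        round((sum(p[i] ** 2 for p in pixels) / n) ** 0.5) for i in range(3)
    )

average_color([(255, 0, 0), (0, 0, 255)])  # → (180, 0, 180), a purplish blend
```

Squaring before averaging weights bright components more heavily, which roughly matches how image files encode brightness and how our eyes perceive it.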

Once that average color is produced, the computer needs a way of finding the closest color among the cards we have available. Is that more of a burnt sienna or a red-orange? A typical (if imperfect) way to approximate how similar two colors are, based on their red-blue-green values, is what’s known as the Euclidean distance formula. Here’s what that looks like as a command sequence:

  • take the difference between the amount of red in the two colors and square it 

  • take the difference between the amount of blue in the two colors and square it 

  • take the difference between the amount of green in the two colors and square it 

  • add the three squares together 

  • take the square root

So to figure out what card should be held up to best capture the average of the colors in the corresponding part of the image, just figure out which of the available colors (blue, yellow green, apricot, timberwolf, mahogany, periwinkle, etc.) has the smallest distance to that average color at that location. That’s the color of the card that should be given to the pixel person sitting in that spot in the grid. 
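Those steps translate directly into code. In this Python sketch, the distance function follows the command sequence above; the card names and their RGB values are rough Crayola-style guesses of mine, not official values:

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two (r, g, b) colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def closest_card(target, palette):
    """Return the name of the available card color nearest the target color."""
    return min(palette, key=lambda name: color_distance(target, palette[name]))

cards = {"blue": (31, 117, 254), "red-orange": (255, 83, 73),
         "burnt sienna": (234, 126, 93), "timberwolf": (219, 215, 210)}

closest_card((250, 90, 80), cards)  # → "red-orange"
```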

The similarity between this distance calculation and the color averaging operation is, I’m pretty sure, just a coincidence. Sometimes a square root is just a square root. 

Stepping back, we can use these operations — color averaging and finding the closest color to the average — to get a computer to help us construct the command sequence for a card stunt. The computer takes as input a target image, a seating chart, and a set of available color cards, and then creates a map of which card should be held up in each seat to best reproduce the image. In this example, the computer mostly handles bookkeeping and doesn’t have much to do in terms of decision-making beyond the selection of the closest color. But the upshot here is that the computer is taking over some of the effort of writing command sequences. We’ve gone from having to select every command for every person pixel at every moment in the card stunt to selecting images and having the computer generate the necessary commands. 

This shift in perspective opens up the possibility of turning over more control of the command-sequence generation process to the machine. In terms of our 2 × 2 grid from chapter 1, we can move from telling (providing explicit instructions) to explaining (providing explicit incentives). For example, there is a variation of this color selection problem that is a lot harder and gives the computer more interesting work to do. Imagine that we could print up cards of any color we needed but our print shop insists that we order the cards in bulk. They can only provide us with eight different card colors, but we can choose any colors we want to make up that eight. (Eight is the number of different values we can make with 3 bits — bits come up a lot in computing.) So we could choose blue, green, blue-green, blue-violet, cerulean, indigo, cadet blue, and sky blue, and render a beautiful ocean wave in eight shades of blue. Great! 

But then there would be no red or yellow to make other pictures. Limiting the color palette to eight may sound like a bizarre constraint, but it turns out that early computer monitors worked exactly like that. They could display any of millions of colors, but only eight distinct ones on the screen at any one time. 

With this constraint in mind, rendering an image in colored cards becomes a lot trickier. Not only do you have to decide which color from our set of color options to make each card, just as before, but you have to pick which eight colors will constitute that set of color options. If we’re making a face, a variety of skin tones will be much more useful than distinctions among shades of green or blue. How do we go from a list of the colors we wish we could use because they are in the target image to the much shorter list of colors that will make up our set of color options? 

Machine learning, and specifically an approach known as clustering or unsupervised learning, can solve this color-choice problem for us. I will tell you how. But first let’s delve into a related problem that comes from turning a face into a jigsaw puzzle. As in the card-stunt example, we’re going to have the computer design a sequence of commands for rendering a picture. But there’s a twist—the puzzle pieces available for constructing the picture are fixed in advance. Similar to the dance-step example, it will use the same set of commands and consider which sequence produces the desired image.
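For the curious, the palette-choosing step can be sketched as plain k-means clustering in Python. This is a generic illustration of the idea, not the book's or any library's implementation (production tools add refinements such as k-means++ seeding):

```python
import random

def kmeans_palette(pixels, k=8, iters=20, seed=0):
    """Pick k palette colors by clustering an image's (r, g, b) pixels:
    assign each pixel to its nearest center, move each center to the
    mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)  # start from k random pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(c[j] for c in members) / len(members)
                                   for j in range(3))
    return centers
```

Run with k=8 over an image's pixels, the returned centers are the eight colors to order from the print shop; each seat then gets whichever of those eight is closest to its patch of the image.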
