The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes. Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."
Digital signatures as a means of non-repudiation are exactly the way this should be done. Any official docs or releases should be signed and easily verifiable by any member of the public.
Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?
Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):
Cryptographic hash: An arbitrary amount of data (like a video file) is used to create a “hash”—a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different and you’ll know the data was altered.
Public-key cryptography: A pair of keys is created. One, the "private" key, is kept secret and used to encrypt data (it can't decrypt its own output); the other, "public" key, can only decrypt data that was encrypted by the first key. Users (like the White House) can post their public key on their website; then, if a subsequent message purporting to come from that user can be decrypted using their public key, it proves it came from them.
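The hash behavior described above is easy to demo. Here's a minimal Python sketch (the byte strings are made-up stand-ins for a video file):

```python
import hashlib

def content_hash(data: bytes) -> str:
    # SHA-256 maps any amount of data to a fixed-length digest;
    # flipping even a single bit of input yields a completely different digest.
    return hashlib.sha256(data).hexdigest()

original = b"official video bytes..."
altered = b"official video bytes,.."  # one character changed

print(content_hash(original))                            # 64 hex characters
print(content_hash(original) == content_hash(original))  # True: same data, same hash
print(content_hash(original) == content_hash(altered))   # False: tampering detected
```

Anyone with the published hash can re-run the same function over their copy of the file and compare the result.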
The best way to handle this would be a green check mark near the video that you could click on; it would give you all the metadata of the video (location, time, source, etc.) along with a digital signature (which would look like a random string of text). Clicking the signature would make your browser show you the chain of trust: where the signature came from, that it's valid, probably even the manufacturer of the equipment it was recorded on.
It would potentially be paired with a law stating that you must not misrepresent a "verified" UI element like a check mark. Sites could technically add a verified mark wherever they like, but the law would deter that - at least for US companies.
It may work the same way as hardware certifications. I believe HDMI has a certification standard: cables and devices must be manufactured to certain specifications to bear the HDMI logo, and the logo is trademarked, so using it without permission is illegal. It doesn't stop cheap knock-offs, but it means that if you buy something bearing the HDMI mark in a store in most US-aligned countries, it's going to work.
For the average end-user, it would look like "https". You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution and you could learn more with a click.
In practice, I see some challenges. You could already go to the source via https (e.g. whitehouse.gov) and verify it that way. An additional benefit exists only if you can verify media that has been re-uploaded elsewhere. Then the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but that it was really signed by the right institution.
It needs some kind of handler, but we mostly have those in place. A web browser could be the handler, for instance. A browser has the green indicator in the upper left telling you a page is secure, that HTTPS is on and valid. This could work like that: the browser verifies the video and displays a green or red dot in the corner, and the user can mouse over or tap it to see who it's verified to be from. But it's up to the user to mouse over it and check whether it says whitehouse.gov or dr-evil-mwahahaha.biz.
TL;DR: one day the user will see an overlay or notification that shows an image/movie is verified as from a known source. No extra software required.
Honestly, I can see this working great in future web browsers. Much like the padlock in the URL bar, we could see something on images that are verified. The image could display a padlock in the lower-left corner or something, along with the name of the source, demonstrating that it's a securely verified asset. "Normal" images would be unaffected. The big problem is how to put something on the page that cannot be faked by other means.
It's a little more complicated for software like phone apps for X or Facebook, but doable. The problem is that those products must choose to add this feature. Hopefully, losing reputation to being swamped with unverifiable media will be motivation enough to do so.
The underlying verification process is complex, but should be similar to existing technology (e.g. GPG). The key is that images and movies typically contain a "scratch pad" area in the file for miscellaneous stuff (metadata). This is where the image's author can add a cryptographic signature for the file itself. The user would never even know it's there.
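As a rough sketch of the "signature in the metadata scratch pad" idea, here's a toy Python version. A dict stands in for a real container format, and a keyed hash stands in for a real signature scheme like GPG; `SECRET`, `fake_sign`, and `fake_verify` are all invented for illustration:

```python
import hashlib

SECRET = b"author-private-key-material"  # placeholder; real schemes use asymmetric keys

def fake_sign(payload: bytes) -> str:
    # Keyed hash as a stand-in for a real cryptographic signature.
    return hashlib.sha256(SECRET + payload).hexdigest()

def fake_verify(payload: bytes, sig: str) -> bool:
    return fake_sign(payload) == sig

# The "file": media payload plus a metadata scratch pad holding the signature.
media = {"payload": "base64-video-frames...", "metadata": {}}
media["metadata"]["signature"] = fake_sign(media["payload"].encode())

# A viewer's software checks the payload against the embedded signature:
print(fake_verify(media["payload"].encode(), media["metadata"]["signature"]))  # True

media["payload"] = "tampered-frames..."
print(fake_verify(media["payload"].encode(), media["metadata"]["signature"]))  # False
```

The user never interacts with the metadata directly; their player does the check and just shows the result.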
Probably you'd notice a bit of extra posting time while the signature is added, but that's about it. The responsibility for verifying the signature would fall to the owners of the social media site. When someone requests verification, basically imagine it as a libel case on fast-forward: you file a claim saying "I never said that," they check the signatures, they shrug and press the delete button, erasing the post, its crossposts, and - if the system is really good - screencap posts and those crossposts of the thing you did not say but that is still being falsely attributed to your account or person.
It basically gives a person absolute control of their own image and voice: unless a piece of media can be proven to have been made with that person's consent, or by that person themselves, it can be wiped from the internet, no trouble.
When it comes to second-party posters, news agencies and such, it'd be more complicated but more or less the same, with the added step that a news agency may be required to provide supporting evidence that what they published is not some kind of misrepresentation, as the offended party filing the takedown might insist for the sake of their public image.
Of course, there could still be a YouTube "Stats for Nerds"-esque add-in in the options tab of a given post that lets you sign-check it against the account it's attributing something to. And a verified-account system could be developed that adds a layer of signing specifically identifying a published account - say, for prominent news reporters/politicians/cultural leaders/celebrities - which gets its own feed so you can look at them or not depending on how ya be feelin' that particular scroll session.
I wouldn't say a signature exactly, because that ensures a video hasn't been altered in any way: not re-encoded, resized, cropped, trimmed, etc. Platforms almost always do some of these things to videos, even if it's not noticeable to the end user.
There are perceptual hashes, but I'm not sure they cover all those transformations, or whether they're secure hashes. I would assume not.
Perhaps platforms would read the metadata in a video for a signature, and would have to serve the video entirely unaltered if one is there?
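For reference, the simplest perceptual hashes look something like this toy average-hash over a tiny grayscale "image" (real ones, e.g. pHash, use DCTs and are far more robust; the 2x2 pixel grids here are invented for illustration):

```python
def average_hash(pixels):
    # One bit per pixel: is it brighter than the image's average brightness?
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(a: str, b: str) -> int:
    # Number of differing bits between two hashes.
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [30, 220]]
reencoded = [[12, 205], [33, 224]]  # slightly shifted values, e.g. after compression
inverted = [[245, 55], [225, 35]]

print(hamming(average_hash(img), average_hash(reencoded)))  # 0: still "the same" image
print(hamming(average_hash(img), average_hash(inverted)))   # 4: clearly different
```

The catch, as noted above, is that these are not secure hashes: an attacker can deliberately craft a very different-looking image that lands on the same perceptual hash.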
You don't need to bother with cryptographically verifying downstream videos; only the source video needs to be cryptographically verifiable. That way you have an unedited, untampered cut that can be verified as faithful to the broadcast.
The White House could serve the video themselves if they wanted to. Just use something similar to PGP for signature validation and voila. Studios can still do all the editing, cutting, etc. - it shouldn't be up to the end user to do the footwork on this, just for the studios to provide a kind of 'chain of custody'. They can point to the original verified video for anyone to compare against, to make sure alterations are things like simple cuts and nothing more.
Rather than using a hash of the video data, you could just include within the video the timestamp of when it was originally posted, encrypted with the White House's private key.
Apple's scrapped on-device CSAM scanning was based on perceptual hashes.
The first collision demos breaking them showed up within hours, with images that looked glitched. After just a week, the newest demos produced flawless images that collided with known perceptual hash values.
In theory you could create some ML-ish compact learning algorithm and use the compressed model as a perceptual hash, but I'm not convinced this can be secure enough unless it's allowed to be large enough, as in some % of the original's file size.
Very few people understand why a GPG signature is reliable or how to check it. Malicious actors will add a "GPG Signed" watermark to their fake videos and call it a day, and 90% of victims will believe it.
Yeah, but all it takes is proving it doesn't have the right signature, and you can make the social media corpo take down every piece of media with that signature for that alone.
What's even better is that you can attack entities that try to maliciously let people get away with misusing their likeness and faking being signed, for failing to defend their IP - basically declaring you intend to take them to court to public-domainify literally everything that makes them any money at all.
If billionaires were willing to allow disinformation as a service then they wouldn't have gone to war against news as a service to make it profitable to begin with.
I just mentioned this in another comment tonight; cryptographic verification has existed for years but basically no one has adopted it for anything. Some people still seem to think pasting an image of your handwriting on a document is "signing" a document somehow.
The average Joe won't know what any of what you just said means. Hell, the Joe in the OP doesn't know what any of you just said means. There's no way (IMO) of simultaneously creating a cryptographic assurance and having it be accessible to the layman.
There is, but only if you can implement a layer of abstraction and get them to trust that layer of abstraction.
Few laymen understand why Bitcoin is secure. They just trust that their wallet software works, because they were told by smarter people that it is secure.
Few laymen understand why TLS is secure. They just trust that their browser tells them it is secure.
Few laymen understand why biometric authentication on their phone apps is secure. They just trust that their device tells them it is secure.
Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.
The ONLY way to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively trying to determine whether content was generated by a human (proving a negative).
Idk, making CP where a child is raped vs making CP where no children are involved seem on very different levels of bad to me.
Both utterly repulsive, but certainly not exactly the same.
One has a non-consenting child being abused, a child that will likely carry the scars of that for a long time, the other doesn't. One is worse than the other.
E: do the downvoters like... not care about child sexual assault/rape or something? Raping a child and taking pictures of it is very obviously worse than putting parameters into an AI image generator. Both are vile. One is worse. Saying they're equally bad is attributing zero harm to the actual assaulting children part.
It could work the same way the padlock icon worked for SSL sites in browsers back in the day. The video player checks the signature and displays the trusted icon.
I mean, how is anyone going to cryptographically verify a video? Either you have an icon in the video itself or displayed near it by the site - which means nothing, since fakers will just copy it into theirs - or you have to sign or publish file hashes for each permutation of the video file sent out. At that point, how are normal people actually going to verify? At best they're trusting the video player of whatever site they're on to be truthful when it says a video is verified.
Saying they want to do this is one thing, but as far as I'm aware, we don't have a solution that accounts for the rampant re-use of presidential videos in news and secondary reporting either.
I have a terrible feeling that this would just be wasted effort beyond basic signing of the video file uploaded on the official government website, which really doesn't solve the problem for anyone who can't or won't verify the hash on their end.
Maybe some sort of visual- and audio-based hash, like MusicBrainz IDs for songs, which are independent of the file itself and based instead on the sound of it. Then the government runs a server kind of like a PGP key server, and websites could integrate functionality to verify against it. But at the end of the day it still works out to an "I swear we're legit, guys" stamp for anyone not technical enough to verify independently themselves.
I guess your post just seemed silly when the end result of this for anyone is effectively the equivalent of your "signed by trump" image, unless the public magically gets serious about downloading and verifying everything themselves independently.
Fuck trump, but there are much better ways to shit on king cheeto than pretending the average populace is anything but average based purely on political alignment.
You have to realize that to the average user, any site serving videos seems as trustworthy as youtube. Average internet literacy is absolutely fucking abysmal.
That's not the point. It's that malicious actors could easily exploit that lack of knowledge to trick users into giving fake videos more credibility.
If I were a malicious actor, I'd put the words "✅ Verified cryptographically by the White House" at the bottom of my posts and you can probably understand that the people most vulnerable to misinformation would probably believe it.
Just make it a law that if as a social media company you allow unverified videos to be posted, you don't get safe harbour protections from libel suits for that. It would clear right up. As long as the source of trust is independent of the government or even big business, it would work and be trustworthy.
Back in the day, many rulers allowed only licensed individuals to operate printing presses. It was sometimes even required that an official should read and sign off on any text before it was allowed to be printed.
Freedom of the press originally meant precisely that this is not done.
I honestly do not see the value here. Barring maybe a small minority, anyone who would believe a deepfake about Biden would probably also not believe the verification and anyone who wouldn't would probably believe the administration when they said it was fake.
The value of the technology in general? Sure. I can see it having practical applications. Just not in this case.
Sounds like a very Biden thing (or a thing for anyone well into their Golden Years) to say, "Use cryptography!", but it's not without merit. How do we verify file integrity? How do we digitally sign documents?
The problem we currently have is that anything that looks real tends to be accepted as real (or authentic). We can't rely on humans to verify authenticity of audio or video anymore. So for anything that really matters we need to digitally sign it so it can be verified by a certificate authority or hashed to verify integrity.
This doesn't magically fix deep fakes. Not everyone will verify a video before distribution and you can't verify a video that's been edited for time or reformatted or broadcast on the TV. It's a start.
We've had this discussion a lot in the Bitcoin space. People keep arguing it has to change so that "grandma can understand it", but I think that's unrealistic. Every technology has some inherent complexities that cannot be removed, and people have to learn them if they want to use it. And people will use it if the motivation is there. Wifi has inherent complexities people have become comfortable with: they know how to look through lists of networks, find the right one, and enter the passkey or go through the sign-on page. Some non-technical people know enough about how Wifi should behave to tell that the internet connection might be out or the router might need a reboot. None of this knowledge was commonplace 20 years ago. It is now.
The knowledge required to leverage the benefits of cryptographic signatures isn't beyond the reach of most people. The general rules are pretty simple. The industry just has to decide to make the necessary investments to motivate people.
The President's job isn't really to be an expert on everything, the job is more about being able to hire people who are experts.
If this was coupled with a regulation requiring social media companies to do the verification and indicate that the content is verified then most people wouldn't need to do the work to verify content (because we know they won't).
It obviously wouldn't solve every problem with deepfakes, but at least fake content couldn't claim to be from CNN or whoever. And yes, someone editing content from trusted sources would make that content no longer trusted, but that's actually a good thing. You can edit videos to make someone look bad, you can slow them down to make a person look drunk, etc. This kind of content should not be considered trusted either.
Someone doing a reaction video going over news content could still have their stuff considered trusted, but it would be indicated as content from the person who produced the reaction video, not as content coming from the original news source. So if you see a "news" video whose verified source is "xXX_FlatEarthIsReal420_69_XXx" rather than CNN, AP News, NY Times, etc., you kinda know what's up.
The number of 80 year olds that know what cryptography is AND know that it's a proper solution here is not large. I'd expect an 80 year old to say something like "we should only look at pictures sent by certified mail" or "You cant trust film unless it's an 8mm and the can was sealed shut!"
It would be nice if none of this was necessary... but we don't live in that world. There is a lot of straight up bullshit in the news these days especially when it comes to controversial topics (like the war in Gaza, or Covid).
You could go a really long way by just giving all photographers the ability to sign their own work. If you know who took the photo, then you can make good decisions about whether to trust them or not.
Random account on a social network shares a video of a presidential candidate giving a speech? Yeah maybe don't trust that. Look for someone else who's covered the same speech instead, obviously any real speech is going to be covered by every major news network.
That doesn't stop ordinary people from sharing presidential speeches on social networks. But it would make it much easier to identify fake content.
Once people get used to cryptographically signed videos, why trust only one source? If a news outlet is caught signing a fake video, they will be in trouble - loss of that trust, if nothing else.
We should get to the point we don't trust unsigned videos.
Not trusting unsigned videos is one thing, but will people be judging the signature or the content itself to determine if it is fake?
Why only one source should be trusted is a salient point. If we are talking trust: it feels entirely plausible that an entity could use its trust (or power) to manufacture a signature.
And for some, it is all too relevant that an entity like the White House (or a gamut of others, past or present) has certainly presented false information as true, to do things like invade countries.
Trust is a much more flexible concept, one that's willing to be bent. So cryptographic verification really has to demonstrate how and why something is fake to the general public. Otherwise it is just a big 'trust me bro.'
This doesn’t solve anything. The White House will only authenticate videos which make the President look good. Curated and carefully edited PR. Maybe the occasional press conference. The vast majority of content will not be authenticated. If anything this makes the problem worse, as it will give the President remit to claim videos which make them look bad are not authenticated and should therefore be distrusted.
It needs to be more general. A video should have multiple signatures. Each signature relies on the signer's reputation, which works both ways. It won't help those who don't care about their reputation, but will for those that do.
A photographer who passes off a fake photo as real will have their reputation hit, if they are caught out. The paper that published it will also take a hit. It's therefore in the paper's interest to figure out how trustworthy the supplier is.
I believe Canon recently announced a camera that cryptographically signs photographs at the point of creation. At that point, the photographer can prove the camera, the editor can prove the photographer, the paper can prove the editor, and the reader can prove the newspaper. If done right, the final viewer can also prove the whole chain, semi-independently. It won't be perfect (far from it), but it might be the best we'll get. Each party wants to protect their reputation, and so has a vested interest in catching fraud.
For this to work, we need a reliable way to sign images multiple times, as well as (optionally) encode an edit history into it. We also need a quick way to match cryptographic keys to a public key.
An option to upload a timestamped key to a trusted 3rd party would also be of significant benefit. Ironically, blockchain might actually be a good use for this, in case a trusted 3rd party can't be established.
Great points and I agree. I also think the signature needs to be built into the stream in a continuous fashion so that snippets can still be authenticated.
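Per-chunk signing could be sketched like this, with a keyed hash standing in for a real signature scheme and a 4-byte "chunk" standing in for seconds of video (all names and sizes here are invented):

```python
import hashlib

KEY = b"broadcaster-key"  # placeholder; a real scheme would sign asymmetrically
CHUNK = 4                 # bytes per chunk; real streams would chunk by time

def sign_stream(data: bytes):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    # Bind each chunk to its position so segments can't be reordered undetected.
    return [(c, hashlib.sha256(KEY + i.to_bytes(4, "big") + c).hexdigest())
            for i, c in enumerate(chunks)]

def verify_chunk(index: int, chunk: bytes, tag: str) -> bool:
    return hashlib.sha256(KEY + index.to_bytes(4, "big") + chunk).hexdigest() == tag

signed = sign_stream(b"frame1frame2frame3")
chunk, tag = signed[1]
print(verify_chunk(1, chunk, tag))    # True: a clipped snippet still checks out
print(verify_chunk(1, b"fake", tag))  # False: an altered snippet is rejected
print(verify_chunk(0, chunk, tag))    # False: a snippet moved to a new position fails
```

Binding the index into each tag is one way to get the "snippets can still be authenticated" property without letting someone silently re-order the clips.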
I don't think that's practical or particularly desirable.
Today, when you buy something, e.g. a phone, the brand guarantees the quality of the product, and the seller guarantees the logistics chain (that it's unused, not stolen, not faked, not damaged in transport, ...). The typical buyer does not care about the parts used, the assembly factory, etc.
When a news source publishes media, they vouch for it. That's what they are paid for (as it were). If the final viewer is expected to check the chain, they are asked to do the job of skilled professionals for free. Do-your-own-research rarely works out, even for well-educated people. Besides, in important cases, the whole chain will not be public to protect sources.
I've thought about this too, but I'm not sure it would work. First, you could hack the firmware of a cryptographically signing camera. I already read about a camera like this that was hacked and its private key leaked. You could give each camera an individual key and then revoke it, maybe.
But you could also photograph a monitor, or use a specifically altered camera lens, or something like that.
Ultimately you'd probably need something like quantum entangled photon encoding to prove that the photons captured by the sensor were real photons and not fake photons. Like capturing a light field or capturing a spectrum of photons. Not sure if that is even remotely possible but it sounds cool haha.
I don't understand your concern. Either it'll be signed White House footage or it won't. They have to sign all their footage otherwise there's no point to this. If it looks bad, don't release it.
The point is that if someone catches the President shagging kids, of course that footage won't be authenticated by the WH. We need a tool so that a genuine piece of footage of the Pres shagging kids would be authenticated, but a deepfake of the same would not. The WH is not a good arbiter since they are not independent.
Then this exercise is a waste of time. All the hard hitting journalism which presses the President and elicits a negative response will be unsigned, and will be distributed across social media as it is today: without authentication. All the videos for which the White House is concerned about authenticity will continue to circulate without any cause for contention.
Anyone can digitally sign anything (maybe not easily or for free). The White House can verify or not verify whatever they choose, but if you, as a journalist, say, want to give credence to video you distribute, you'll want to digitally sign it. If a video switches hands several times without being signed, it might as well have been cooked up by the last person who touched it.
Signatures aren't meant to prove authenticity. They prove the source, which you can use to weigh the authenticity.
I think the confusion comes from the fact that cryptographic signatures are mostly used in situations where proving the source is equivalent to proving authenticity. Proving a text message is from me proves the authenticity as there's no such thing as doctoring my own text message. There's more nuance when you're using signatures to prove a source which may or may not be providing trustworthy data. But there is value in at least knowing who provided the data.
You mean to tell me that cryptography isn't the enemy and that instead of fighting it in the name of "terrorism and child protection" that we should be protecting children by having strong encryption instead??
Why not just official channels of information - e.g. a White House Mastodon instance with politicians' accounts, government-hosted and auto-mirrored by third parties?
Should probably start out with the colour-mixing one. That was very helpful for me in figuring out public-key cryptography. The difficulty comes in when they feel like you're treating them like toddlers, so they start behaving more like toddlers (which they are 99% of the time).
I see no difference between creating a fake video/image with AI and Adobe's packages. So to me this isn't an AI problem, it's a problem that should have been resolved a couple of decades ago.
Or, more likely, they will only discredit fake news and won't verify genuine footage that reflects poorly on them. Like a hot mic catching them calling someone a jackass: the White House says no comment.
I think this is a great idea. Hopefully it becomes the standard soon, cryptographically signing clips or parts of clips so there's no doubt as to the original source.
When it comes to misinformation, I always remember when I was a kid in the early 90s: another kid told me confidently that the USSR had landed on Mars, gathered rocks, filmed it, and returned to Earth (it now occurs to me that this homeschooled kid was confusing the real moon landing). I remember knowing it was bullshit but not having a way to check the facts. The Internet solved that problem. Now, by God, the Internet has recreated the same problem.
Ultimately, reputation-based trust combined with cryptographic keys is likely the best we can do. You (semi-automatically) sign the photo and upload its stamp to a 3rd party. They can verify that they received the stamp from you, and at what time. That proves the image existed at that time, and that it's linked to your reputation. Anything more is just likely to leak, security-wise.
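A minimal sketch of that third-party "stamp" service, assuming an in-memory log in place of a real timestamping authority (the class and its API are invented for illustration):

```python
import hashlib
import time

class TimestampLog:
    """Records the first time each content hash was submitted."""

    def __init__(self):
        self._entries = {}

    def submit(self, content: bytes) -> float:
        digest = hashlib.sha256(content).hexdigest()
        # Only the first submission time is kept; re-submissions change nothing.
        return self._entries.setdefault(digest, time.time())

    def first_seen(self, content: bytes):
        return self._entries.get(hashlib.sha256(content).hexdigest())

log = TimestampLog()
photo = b"raw photo bytes"
t = log.submit(photo)

print(log.first_seen(photo) == t)        # True: proves the photo existed by time t
print(log.first_seen(b"altered photo"))  # None: this version was never registered
```

Note that only the hash is uploaded, never the photo itself, which fits the "anything more is likely to leak" concern.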
Probably a signed comment from the Double-Cone Crusader himself, basically free PR so I don't see why he or any other president wouldn't at least have an intern give you a signed comment fist bump of acknowledgement
We need something akin to the simplicity and ubiquity of Google that does this, government funded and with transparent oversight. We're past the point of your aunt needing a way to quickly check if something is obvious bullshit.
Call it something like Exx-Ray, the two Xs mean double check - "That sounds very unlikely that they said that Aunt Pat... You need to Exx-Ray shit like that before you talk about it at Thanksgiving"
Or same thing, but with the word Check, CHEXX - "No that sounds like bullshit, I'm gonna CHEXX it... Yup that's bullshit, Randy."
I've always thought that bank statements should require cryptographic signatures for ledger balances. Same with individual financial transactions, especially customer payments.
Without this we're pretty much at the mercy of trust with banks and payment card providers.
I imagine there's a lot of integrity requirements for financial transactions on the back end, but the consumer has no positive proof except easily forged statements.
Yeah but that would require banks to actually invest money to improve customer trust... Not something banks are very interested in, really. It's easier and cheaper to just have the marketing department come up with some nonsense claim and advertise that instead.
I can totally see this being a thing, and I kinda wish it would happen, just because I love old people trying to sound like they know tech when they don't - but in a context where the tech advice is actually still helpful.
I've been saying for a long time now that camera manufacturers should just put encryption circuits right inside the sensors. Of course that wouldn't protect against pointing the camera at a screen showing a deepfake or someone painstakingly dissolving top layers and tracing out the private key manually, but that'd be enough of the deterrent from forgery. And also media production companies should actually put out all their stuff digitally signed. Like, come on, it's 2024 and we still don't have a way to find out if something was filmed or rendered, cut or edited, original or freebooted.
Oh, they've actually been developing that! Thanks for the link, I was totally unaware of the C2PA thing. Looks like the ball has been very slowly rolling ever since 2019, but now that Google is on board (they joined just a couple of days ago), it might fairly soon be visible/usable by ordinary users.
Mark my words, though, I'll bet $100 that everyone's going to screw it up miserably on their first couple of generations. Camera manufacturers are going to cheap out on electronics, allowing for data substitution somewhere in the pipeline. Every piece of editing software is going to be cracked at least a few times, allowing for fake edits. And production companies will most definitely leak their signing keys. Maybe even Intel/AMD could screw up again big time. But, maybe in a decade or two, given the pace, we'll get a stable and secure enough solution to become the default, like SSL currently is.
If you've been saying this for a long time please stop. This will solve nothing. It will be trivial to bypass for malicious actors and just hampers normal consumers.
Thank you, lol. This is what people end up with when they think of the first solution that comes to mind. Often just something that makes life harder for everyone EXCEPT bad actors. This just creates hoops for people following the rules to jump though while giving the impression the problem was solved, when it's not.
You must be severely misunderstanding the idea. The idea is not to encrypt it in a way that it's only unlockable by a secret and hidden key, like DRM or cable TV does, but the reverse: to encrypt it with a key whose output is unlockable by a publicly available and widely shared key, where successful decryption acts as proof of content authenticity. If you don't care about authenticity, nothing is stopping you from spreading the decrypted version, so it shouldn't affect consumers one bit. And I wouldn't describe "get a bunch of cameras, rip the sensors out, carefully and repeatedly strip the top layers off and scan with an electron microscope until you reach the encryption circuit; repeat enough times to collect enough scans undamaged by the stripping process to piece them together and trace out the entire circuit; then spend a few weeks debugging it in a simulator to work out the encryption key" as "trivial".
I'll be talking about digital signatures, which are the basis for such things. I assume a basic understanding of asymmetric cryptography and hashing.
Basically, you hash the content you want to verify with a secure hashing function and encrypt the resulting value with your private key. You can then append this encrypted value to the content or release it alongside it.
To verify the content, anyone can use your public key to decrypt the signature, which yields the original hash value, then compare it against a hash they compute themselves by running the content through the same function.
So by signing their videos with the white house private key and publishing their public key somewhere, you can verify the video's authenticity like that.
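The hash-then-"encrypt with the private key" flow described above is essentially textbook RSA signing. Here is a toy sketch in Python with deliberately tiny, hypothetical parameters for illustration only; real systems use 2048-bit-plus keys and padded schemes such as RSA-PSS:

```python
import hashlib

# Toy RSA keypair with tiny textbook primes -- never use sizes like this in practice.
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent
d = 2753    # private exponent: the inverse of e modulo lcm(p-1, q-1)

def sign(message: bytes) -> int:
    # Hash the content, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the public key and compare hashes.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

video = b"official White House footage"
sig = sign(video)
print(verify(video, sig))         # True: untampered content verifies
print(verify(b"tampered!", sig))  # tampered content should not verify
```

Anyone holding only the public pair (n, e) can check the signature, but producing a valid one requires d, which stays private.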
Only RSA uses a function equivalent to encryption when producing signatures, and only when used in one specific scheme. Every other algorithm has a unique signing function.
Click the padlock in your browser, and you'll be able to see that this webpage (if you're using lemmy.world) was served over a connection verified by Google Trust Services as belonging to a server controlled by lemmy.world. In addition, your browser can remember that, and if you later get a page from a server claiming to be the same site but verified by a different provider, the browser (should) flag that and warn you it might be an impostor.
The idea is you'll be able to view metadata on an image and see that it comes from a source that has been verified by a third party such as Google Trust Services.
How it works, mathematically... well, look up "asymmetric cryptography and hashing". It gets pretty complicated and there are a few different mathematical approaches. Basically, though, the White House will have a key that they will not share with anyone, and only that key can be used to authorise the metadata. Even Google Trust Services (or whatever cloud provider you use) does not have the key.
There's been a lot of effort to detect fake images, but that's really never going to work reliably. Proving an image is valid, however... that can be done with pretty good reliability. An attack would be at home on Mission Impossible. Maybe you'd break into a White House photographer's home at night, put their finger on the fingerprint scanner of their laptop without waking them, then use their laptop to create the fake photo... delete all traces of evidence and GTFO. Oh, and everyone would know which photographer supposedly took the photo; they'd be asked how they took that photo of Biden acting out of character, and the real photographer would immediately say they didn't take it.
I'm more interested in how exactly you'd implement something like this.
It's not like videos viewed on TikTok display a hash for the file you're viewing; and users wouldn't look at that data anyway, especially those who would be swayed by a deepfake...
Like you said, the issue is verification by the end user. It is trivial to produce a digitally signed (and timestamped) file. It is also trivial to provide trusted tools to verify these files. It is immensely difficult to provide a solution users will actually care about, which is why, more often than not, the most that people ask of companies in the data-authenticity business is "can we show a green check on screen? That would be perfect!"
And we end up with something that nobody checks beyond the "it's probably ok" phase. If the goal is to teach the masses about trusting their source, either they have a miracle solution, or it just won't work.
(and all that is assuming people actually care about checking the authenticity of the stuff they see, which is not a norm as it is…)
Digital signature. A watermark may be useful so that an unauthorized user can’t easily hide their source without noticeably defacing the photo, but it doesn’t prevent anyone from modifying it
A digital signature is a somewhat similar idea, except that signature verification fails if there are any changes at all. This is tough to do with a photograph, since some applications blindly re-encode images or change their resolution, which would break the signature; those pipelines would need to be fixed.
You could argue this is a good use case for blockchain, certainly much better than those stupid monkey images. When Jon Stewart parodies a politician, there should be a verifiable chain of evidence from the White House release to the news bureau to his studio, before they alter the lighting to highlight orange skin tone for yucks.
The White House is increasingly aware that the American public needs a way to tell that statements from President Joe Biden and related information are real in the new age of easy-to-use generative AI.
Big Tech players such as Meta, Google, Microsoft, and a range of startups have raced to release consumer-friendly AI tools, leading to a new wave of deepfakes — last month, an AI-generated robocall attempted to undermine voting efforts related to the 2024 presidential election using Biden's voice.
Yet, there is no end in sight for more sophisticated new generative-AI tools that make it easy for people with little to no technical know-how to create fake images, videos, and calls that seem authentic.
Ben Buchanan, Biden's Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.
While last year's executive order on AI created an AI Safety Institute at the Department of Commerce tasked with creating standards for watermarking content to show provenance, the effort to verify White House communications is separate.
Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.
The original article contains 367 words, the summary contains 218 words. Saved 41%. I'm a bot and I'm open source!
It's way more feasible to simply require social media sites to do the verification and display something like a blue check on verified videos.
This is actually a really good idea. Sure there will still be deepfakes out there, but at least a deepfake that claims to be from a trusted source can be removed relatively easily.
Theoretically a social media site could boost content that was verified over content that isn't, but that would require social media sites to not be bad actors, which I don't have a lot of hope in.
I agree that it's a good idea. But the people most swayed by deepfakes of Biden are definitely the least concerned with whether their bogeyman, the "deep state," has verified them.
Positioning using distance bounded challenge-response protocols with multiple beacons is possible, but none of the positioning satellite networks supports it. And you still can't prove the photo was taken at the location, only that somebody was there.
A link to the video could be shared via ActivityPub.
The video would be loaded over HTTPS; we can verify that the video is from the white house, and that it hasn't been modified in-transit.
A big issue is that sites don't want to share a link to an independently verifiable video; they want you to load a copy of it from their website/app. This way you build trust with their brand (e.g. the New York Times), spend more time looking at their ads, or subscribe. @stockRot@technology
No, all you need for this is a digital signature and to publish the public key on an official government website. And maybe for platforms like YouTube and TikTok to integrate a verification status into their UI (e.g. flag any footage of candidates that was not signed by the government's private key as "unverified").
Don’t need to involve a blockchain to make cryptographically provable authenticity. Just a digital signature.
The only thing a hash in a blockchain would add is proof the video existed at the time the hash was added to the blockchain. I can think of cases where that would be beneficial too, but it wouldn’t make sense to put a hash of every video on a public blockchain.
Anybody can also verify it if they just host the hash on their own website, or host the video itself.
Getting the general populace to understand blockchain implementations, or how to interface with one, is an unrealistic task.
What does a distributed zero-trust model add to something that is inherently centralized, requiring trust in only one party?
Blockchain is the opposite of what you want for this problem, I'm not sure why people bring this up now. People need to take an introductory cryptography course before saying to use blockchain everywhere.
Putting it on the blockchain ensures you can always go back and say "see, at this date/time, this key verified this file/hash".. If you know the key of the uploader (the white house), you can verify it was signed by that key. Guatemala used a similar scheme to verify votes in elections using Bitcoin. Could the precinct lie and put in the wrong vote count? Of course! But what it prevented was somebody saying "well actually the precinct reported a different number" since anybody could verify that on chain they didn't. It also prevented the precinct themselves from changing the number in the future if they were put under some kind of pressure.
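The "nobody can change the number later" property comes from each record committing to the one before it. Here's a minimal sketch of that idea in Python (a plain hash chain, not an actual blockchain; the structure and names are hypothetical):

```python
import hashlib
import json

# Minimal append-only chain: each entry commits to a file hash and to the
# previous entry, so an earlier record can't be rewritten without breaking
# every hash that follows it.
chain = []

def add_record(file_bytes: bytes) -> dict:
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "file_hash": hashlib.sha256(file_bytes).hexdigest(),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def chain_valid() -> bool:
    prev = "0" * 64
    for e in chain:
        body = {"file_hash": e["file_hash"], "prev": e["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != recomputed:
            return False
        prev = e["entry_hash"]
    return True

add_record(b"video one")
add_record(b"video two")
print(chain_valid())               # True
chain[0]["file_hash"] = "f" * 64   # tamper with an old record
print(chain_valid())               # False
```

A real public blockchain adds the distributed part: many independent parties hold copies of the chain, so no single operator can quietly rebuild it after tampering.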
Wouldn't this be defeated by people re-uploading the video? I think all these sites will re-encode the videos uploaded so the hash will not match, then people will use that as proof that the video is not real.
Tinfoil hat time. It's probably because they need to start creating AI videos to show he's 'competent and coherent,' and they'll say their tests prove it's a real video, not a fake. And since the government said it's true, morons will believe it.
If Trump is any indication, no politician will ever need to be 'competent and coherent' ever again, constituents would vote in a literal corpse if it had a sign propped on it saying "gays bad"
Think of generating an md5sum to verify that the file you downloaded online is what it should be and hasn't been corrupted during the download process or replaced in a Man in the Middle attack.
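That checksum idea is a few lines in any language. Here's a sketch with Python's hashlib, using SHA-256 rather than MD5 since MD5 is no longer collision-resistant; the data here is just a stand-in for real file bytes:

```python
import hashlib

data = b"official video bytes ..."          # stand-in for a downloaded file
tampered = bytes([data[0] ^ 1]) + data[1:]  # flip a single bit

# Even a one-bit change produces a completely different digest.
print(hashlib.sha256(data).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
```

Comparing the digest you compute against the one the publisher posted tells you whether the file was corrupted or swapped out in transit.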