SURVEILLANCE FIRM PALANTIR TAKES UP PENTAGON DEFENSE CONTRACT
Project Maven is a Pentagon defense contract that aims to use AI to analyze footage from unmanned aerial vehicles (UAVs). Palantir recently took over the contract after Google abandoned it last year under major backlash from within the company; many employees threatened to quit if Google continued to work on military products.
Of course, the contract was happily picked up by another company: Palantir, founded by Peter Thiel, co-founder of PayPal. Thiel pointed out the irony in Google’s decision to pull out of Project Maven while continuing with Project Dragonfly, a censored search engine project for China. He claimed the decision amounted to treason and should be investigated. We have highlighted Google’s questionable practices before, and time and time again the company demonstrates that it has something to hide.
As previously stated, Project Maven is a lucrative defense contract with the Pentagon to build artificial intelligence programs that can analyze video feeds from aerial drones. The Defense Department has for years discussed winning wars with computer algorithms and artificial intelligence, and that appears to be the overall goal here.
However, during the Defense One Tech Summit that took place before the project was launched, Marine Corps Col. Drew Cukor stated that “people and computers will work symbiotically to increase the ability of weapon systems to detect objects”. He added that “eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they are doing now. That’s our goal.” Cukor is chief of the Algorithmic Warfare Cross-Function Team in the Intelligence, Surveillance and Reconnaissance Operations Directorate-Warfighter Support, part of the Office of the Undersecretary of Defense for Intelligence.
These statements highlight that the main justification for Project Maven is to aid a workforce increasingly overwhelmed by incoming data, including millions of hours of video. It has become clear over the years that the Department of Defense must integrate artificial intelligence and machine learning to maintain a decisive advantage over competitors.
The project primarily focuses on computer vision, a sub-field of machine learning and deep learning that autonomously extracts objects of interest from moving or still imagery. Given that focus, it is no wonder the Pentagon sought Google’s help in the first place: the company is a global leader in AI innovation.
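To give a rough feel for what “extracting objects of interest from imagery” means at its most basic level, here is a minimal Python sketch: it thresholds a toy grayscale frame and groups bright pixels into distinct objects via connected-component labeling. This is only an illustration of the underlying computer-vision idea, not Project Maven’s actual (and far more sophisticated, deep-learning-based) pipeline; the function name and toy data are our own.

```python
# Minimal sketch: find "objects" (connected bright regions) in a
# grayscale image given as a 2D list of pixel intensities (0-255).
from collections import deque

def extract_objects(image, threshold=128):
    """Return a list of objects, each a set of (row, col) pixels that
    exceed the threshold and are 4-connected to one another."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > threshold and not seen[r][c]:
                # Flood-fill from this seed pixel to collect one object.
                component, queue = set(), deque([(r, c)])
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    component.add((cr, cc))
                    for nr, nc in ((cr-1, cc), (cr+1, cc),
                                   (cr, cc-1), (cr, cc+1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] > threshold
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                objects.append(component)
    return objects

# A toy 5x5 frame with two bright blobs against a dark background.
frame = [
    [0, 200, 200, 0,   0],
    [0, 200, 0,   0,   0],
    [0, 0,   0,   0,   0],
    [0, 0,   0,   255, 255],
    [0, 0,   0,   255, 0],
]
print(len(extract_objects(frame)))  # prints 2: two distinct objects
```

Real systems replace the crude threshold with trained neural networks, but the end goal is the same: turn raw pixels into a short list of candidate objects an analyst can review.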
GOOGLE PUBLISHES ITS AI ETHICAL PRINCIPLES FOLLOWING BACKLASH
After leaks indicated that Google was supplying AI technology to the Pentagon to analyze drone footage, over 4,000 employees signed a petition demanding that Google cease work on Project Maven and promise never again to build warfare technology. Google has claimed time and time again that if you see something you think isn’t right, you should speak up. Employees did, and the company listened. Google CEO Sundar Pichai wrote in a blog post that the company will not develop technologies or weapons that cause harm, or anything that can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.” Critics were quick to point out that the clause about “accepted norms” implies that Google intends to push the boundaries of what is considered acceptable with an ends-justify-the-means mentality. This is exactly what Google does and intends to keep doing.
Google’s employees pressured the company to drop Project Maven.
After backing out of the Pentagon contract, Google posted its key objectives for AI development. They are as follows:
– Be socially beneficial
– Avoid creating or reinforcing unfair bias
– Be built and tested for safety
– Incorporate privacy design principles
– Uphold high standards of scientific excellence
– Be made available for uses that accord with these principles.
A few of these are fairly standard, but others are rather ambiguous, leaving Google a lot of wiggle room when it comes to deploying AI that could serve a “morally grey” purpose. What are your thoughts on these principles? How should Palantir proceed with its new defense contract? Leave your comments down below.