Artificial Intelligence (AI) is the field of science concerned with building computers and machines that can reason, learn, and act in ways that would normally require human intelligence, or that involve data at a scale beyond what humans can analyze.
I’m sure you’ve all heard of ChatGPT, Copilot, Grammarly, Fireflies, Chatbot, etc. Perhaps you’ve even tried some of them. If you’re a college student, you’ve likely used one of these AI programs to write a paper for you, and then run HumanizeAI over the output so it looks as if the paper was written by a human and not a machine.
The fascist regime wants to build HUGE data centers to process all this AI stuff. And while there is a place for AI, there is a lot more bad than good, although adoption looks inevitable, in large part because too many people are lazy.
We’ll come back to that.
There are two major problems with the proposed development. The first is that data centers need a MASSIVE amount of electricity. That electricity can come from all sorts of sources, but it has to travel through the grid. Actually, through one of the three major US grids: the Eastern Interconnection, the Western Interconnection, and Texas (Alaska and Hawaii run their own). Most of our grid was built in the 1960s and ’70s and is woefully inadequate for our current needs, and that’s BEFORE we add more data centers.
Where will the power come from? Even if there were “more” electrical sources, the grid couldn’t handle them. People who generate solar power for their homes and businesses are supposed to be able to send any excess to their local electric company’s grid. This company has an article defining the problem: there is a dearth of grid capacity, so people cannot sell their excess power back. Their analysis is solid, but remember, they have a product to sell to help solve the problem.
Kevin O’Leary sees the problem, and thinks the solution lies in gas turbines. In Alberta. Full deets.
There you have the first major problem with that $500 billion (with a “b”) data center wet dream of the Orange Menace. POWER! How much power? This much.
But there’s more. Or, more accurately, less. It’s better AI, for less development cost, and a smaller power footprint, and fewer chips. It’s Chinese. It’s free to use. It’s DeepSeek.
This is analogous to the US car manufacturing predicament. What does Detroit want to build? BIGGER! More expensive! Heavier! More features! More profit for them. What do consumers want? Safe, reliable cars that get them from point A to point B and back again. It shows in the numbers. Sorry, I digressed.
Back to DeepSeek. Its release tanked the market, especially Nvidia. If you can do the same amount of processing with fewer chips, using less power, it exposes the standard American approach for the inefficient money pit it is.
It fills me with great joy when Von Shiteznpants can’t get what he wants.
Back to the good and bad of AI.
As I mentioned, college students are using AI to write their papers. I cannot overstate what a disaster this is. If AI writes a paper, the student has learned NOTHING beyond how to search a topic and then cut and paste a document. This means kids will be able to graduate and know NOTHING: not in their major, not even in their gut courses. There is a simple solution that no one but me likes, because it’s diametrically opposed to “lazy”.
The solution is “Blue Books”. They require that someone use a pen or pencil and WRITE their exam answers. Or their essay. And then the professor needs to read what will likely be horrific penmanship, but still, original work.
I know professionals who use AI for proposals, reports, and other documents. Think it through: when a prospective client receives proposals from multiple companies vying for the contract, ALL the proposals could easily be identical except for the name, address, and email of the submitter. Reports will consist of aggregated data, not a record of what actually occurred.
There are other bad uses for AI: bad actors, deep fakes, propaganda, and all the rest you’re already worried about.
There are some good uses for AI, although they too can be leveraged for evil. For example, AI can be used to time traffic lights and help traffic flow more smoothly, especially the ramp meters that restrict entry to highways. Sounds good. It really does. However, a bad actor can use that same AI program to cause accidents and traffic jams. It all depends on who is doing what to whom.
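For the curious, here’s roughly what the “smart” part of a ramp meter boils down to. This is a toy sketch in Python; the function name and the thresholds are made up for illustration, not any real traffic agency’s logic, and real adaptive systems are far more elaborate:

```python
# Toy sketch of threshold-based ramp metering. All names and numbers
# here are hypothetical illustrations, not a deployed system's logic.
def metering_interval_seconds(mainline_occupancy: float) -> float:
    """Return seconds between green lights at a highway on-ramp.

    mainline_occupancy: fraction of highway sensors reporting a car (0.0-1.0).
    The busier the highway, the longer cars wait at the ramp light.
    """
    if not 0.0 <= mainline_occupancy <= 1.0:
        raise ValueError("occupancy must be between 0 and 1")
    if mainline_occupancy < 0.3:   # light traffic: release cars quickly
        return 2.0
    if mainline_occupancy < 0.6:   # moderate traffic: slow the ramp down
        return 6.0
    return 12.0                    # heavy traffic: hold cars back longest
```

Notice that whoever sets those thresholds controls the traffic, which is exactly my point about bad actors.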
I had a mammogram last December, and the report came from AI. Unlike previous mammograms, this one told me my lifetime risk of breast cancer and my breast tissue density. That latter point is actually useful: if breast tissue is dense (as determined by AI), a patient may need other studies, because a standard mammogram can miss early tumors. But I worry about that “lifetime risk” number. What if AI says a woman has a 70% lifetime risk of breast cancer: should she have a double mastectomy “just in case”? Will she spend the rest of her life worried? The obvious solution is for the data to go to a clinician who can work with the patient and help her make the best decision, rather than just sending the number straight to the patient. And, of course, there’s the issue of trusting that the AI data is correct.
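On the density point: mammogram reports commonly use the BI-RADS density categories A through D, where C and D count as “dense,” the range where supplemental screening gets discussed. Here’s a toy sketch of that lookup; the function name is my invention, and an actual clinical decision involves far more than one letter:

```python
# Toy sketch assuming the report uses the standard BI-RADS density
# categories (A-D). "Dense" means category C or D, which is when
# supplemental screening is commonly discussed with a clinician.
def needs_supplemental_screening_discussion(birads_density: str) -> bool:
    """True if breast density alone warrants discussing extra imaging.

    A: almost entirely fatty      B: scattered fibroglandular
    C: heterogeneously dense      D: extremely dense
    """
    category = birads_density.strip().upper()
    if category not in {"A", "B", "C", "D"}:
        raise ValueError("expected a BI-RADS density category A-D")
    return category in {"C", "D"}
```

The lifetime-risk number, by contrast, has no clean cutoff like this, which is exactly why it belongs in a clinician’s hands and not in a form letter.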
Finally, let’s think about the use of AI in self-driving cars. I am personally opposed to self-driving cars because I drive a stick shift car. Every car I have ever bought has been a standard. I love d-r-i-v-i-n-g. I love little sports cars. Winding roads, wind in my hair, the control over the different gears. YUM!
But that’s personal. I also object to self-driving cars because everything I’ve ever read about them indicates that they make all sorts of errors. Now overlay that with a “thinking” computer (which has never, itself, driven anything) determining speed, turns, when to stop, and how to handle obstructions.
Does anyone else see a problem?
The energy consumption is certainly a large concern, but there is also the concern that the machines and programs providing the answers are controlled by the owners of the AI models, and are therefore subject to those owners’ biases, as we watched happen with X/Twitter. It is no longer a free space to exchange information.
Orwell told us about the Ministry of Truth in 1984. Turns out he was right, just 40 years ahead of schedule.
Just a more powerful weapon to lead us on the road to perdition.