This morning’s presentation at 79°West by Mark McNeilly, “AI in Business,” was excellent and thought-provoking.
Here are his slides: ChatGPT & AI Preso – Innovate Carolina Handout – Prof Mark McNeilly
I spent a few hours exploring only some of the things that I jotted down in my notes, and I thought you might be interested.
Mark McNeilly’s Articles
Mark McNeilly’s articles on AI (you may have to give your email to get access)
https://markmcneilly.substack.com/
The Pace of Change
A quote often attributed to Vladimir Lenin, the Russian revolutionary:
“There are decades when nothing happens; and there are weeks when decades happen.”
“Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.” — WaitButWhy
ChatGPT 4 “kinda sucks”?
ChatGPT-4 vs 3.5:
- GPT-4 reportedly has around 400 billion parameters, compared to GPT-3.5’s 175 billion (OpenAI has not officially disclosed GPT-4’s size).
- It offers more accurate responses and a better understanding of complex queries.
- GPT-4 features a larger context window capable of processing over 300 pages of text.
- Integration of GPT-4 into services like Quora’s Poe Subscriptions and Microsoft’s Bing search engine.
- GPT-4 is 82% less likely to respond to disallowed prompts and 60% less likely to fabricate facts than GPT-3.5.
ChatGPT-5 vs 4:
Sam Altman has hinted at the development of GPT-5, suggesting significant improvements over GPT-4. He said GPT-4 “kind of sucks” and expects a substantial leap to GPT-5. Altman also discussed challenges for future AI models, such as energy requirements and AI-chip procurement. There is no official release date for GPT-5, but it is expected this year.
https://www.msn.com/en-ph/news/other/sam-altman-hints-at-gpt-5-says-gpt-4-sucks/ar-BB1ko8hY
Claude 2 > ChatGPT 4, maybe
Several sites say Claude is better, but Claude has no connection to the internet, so it cannot look anything up. I think that is highly limiting.
https://anakin.ai/blog/chatgpt-4-vs-claude-sonnet/
https://lifehacker.com/tech/claude-ai-versus-chatgpt-which-is-better
Levels of AI
ANI << AGI << ASI
Artificial Narrow Intelligence (ANI):
- Also known as Weak AI, ANI is a type of artificial intelligence designed to perform a single task or a limited range of tasks.
- Task-Specific: ANI systems are specialized to handle specific tasks, such as voice recognition, image recognition, or playing chess.
- Limited Understanding: Unlike humans, ANI lacks a general understanding of the world. It operates within a predefined set of rules and cannot go beyond its programming.
- No Self-Awareness: ANI does not possess consciousness or self-awareness. It simulates human behavior based on a narrow range of parameters.
- Current AI: Most AI systems today, including chatbots, recommendation systems, and autonomous vehicles, are examples of ANI.
- Evolutionary Step: ANI is considered the first step in AI development, preceding more advanced forms like AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence).
Artificial General Intelligence (AGI):
- Also known as Strong AI.
- AGI can perform any intellectual task that a human being can.
- It has the ability to learn, understand, and apply knowledge in different domains.
- AGI is flexible and adaptable, not limited to a single task.
Artificial Superintelligence (ASI):
- ASI is a level of intelligence that exceeds the brightest and most gifted human minds in every field, including scientific creativity, general wisdom, and social skills.
- ASI would be capable of better decision-making and problem-solving than humans.
- It represents an evolution of AGI, with capabilities that are not just quantitatively, but also qualitatively, beyond human abilities.
- In essence, while AGI matches human intelligence and capability, ASI surpasses it to a degree that is difficult to comprehend from our current standpoint.
Runway AI Video-Generator
https://runwayml.com (not ready for prime time, I think)
Glass AI
Glass AI is a technology company that specializes in an AI-powered business intelligence platform.
Prompt Engineering
- AI will not replace you. Someone who can use AI will replace you.
- “To ask the proper question is half of knowing.” – Roger Bacon
- “The illiterate of the twenty-first century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.” — Alvin Toffler
Tips for prompt engineering:
- https://www.youtube.com/watch?v=tZ8e8h2Lqtc (some self-promotion, but 5 minutes)
- https://www.youtube.com/watch?v=jC4v5AS4RIM (the 6 components in a thorough prompt)
The Copilot Brand is confusing
Copilot is Microsoft’s version of ChatGPT (Microsoft owns roughly half of OpenAI’s for-profit arm), but “Copilot” is also used in branding for other Microsoft products, like GitHub Copilot and Sales Copilot.
Kranzberg’s Laws
Kranzberg’s laws are a series of adages formulated by Melvin Kranzberg, a prominent historian of technology. These laws are intended to provide insights into the interplay between technology and society. Here are the six laws:
- Technology is neither good nor bad; nor is it neutral. – Technology’s impact is shaped by its context and the way it’s used.
- Invention is the mother of necessity. – New technologies create new needs and requirements.
- Technology comes in packages, big and small. – Technology rarely operates in isolation; it often interacts with other technologies and systems.
- Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions. – Social, political, and economic concerns often overshadow technological considerations in policy-making.
- All history is relevant, but the history of technology is the most relevant. – Understanding technological history is crucial to comprehending human history.
- Technology is a very human activity – and so is the history of technology. – Technology is deeply intertwined with human actions, choices, and cultural values.
https://en.wikipedia.org/wiki/Melvin_Kranzberg
AI’s Alignment Problem
The alignment problem with generative AI refers to the challenge of ensuring that AI systems’ goals and behaviors align with human values and intentions. Here are some key points about the alignment problem:
Goal Misalignment: There’s a risk that an AI’s goals may not match those of its human users or creators, leading to unintended consequences.
Complexity of Human Values: Human values are complex and often implicit, making it difficult to codify them into AI systems.
Superintelligence Risks: The problem becomes more pronounced with the development of superintelligent AI systems, which could potentially act in ways that are harmful to humanity if their goals are not aligned with ours.
Technical Challenges: Solving the alignment problem involves both specifying the AI’s purpose clearly (outer alignment) and ensuring the AI adopts this specification robustly (inner alignment).
Safety and Control: The ultimate goal is to develop AI that can be controlled and that behaves in ways that are beneficial to humans.
https://spectrum.ieee.org/the-alignment-problem-openai
Energy Use
Training a LLM
“Training a large language model like GPT-3, for example, is estimated to use just under 1,300 megawatt hours (MWh) of electricity; about as much power as consumed annually by 130 US homes. To put that in context, streaming an hour of Netflix requires around 0.8 kWh (0.0008 MWh) of electricity. That means you’d have to watch 1,625,000 hours to consume the same amount of power it takes to train GPT-3.”
https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption
https://arxiv.org/pdf/2311.16863.pdf
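The Netflix comparison above is simple division; a quick back-of-the-envelope check (using only the figures quoted from the article) confirms it:

```python
# Sanity-check the training-energy comparison quoted above.
gpt3_training_mwh = 1300       # estimated energy to train GPT-3, in MWh
netflix_hour_mwh = 0.0008      # ~0.8 kWh per streamed hour, in MWh

streaming_hours = gpt3_training_mwh / netflix_hour_mwh
print(f"{streaming_hours:,.0f} hours of Netflix")  # → 1,625,000 hours
```

That is about 185 years of continuous streaming, which helps make the scale of a single training run concrete.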
Using an LLM
For a prompt that generates text, the energy needed is about 47 watt-hours, roughly as much as a 10-watt LED bulb running for about 5 hours.
Watt’s in our Query? Decoding the Energy of AI Interactions (substack.com)
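The LED-bulb comparison is again just division over the article’s figures; a small check shows the per-query number works out to roughly five bulb-hours:

```python
# Sanity-check the per-query energy comparison above.
query_wh = 47        # ~energy for one text-generating prompt, in watt-hours
led_watts = 10       # power draw of a 10 W LED bulb

bulb_hours = query_wh / led_watts
print(f"{bulb_hours:.1f} hours")  # → 4.7 hours, i.e. roughly 5
```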
UNC Generative AI Usage Philosophy
- AI should help you think.
- Use these tools to give you ideas, perform research (in compliance with point 2 below), and analyze problems.
- Engage with AI Responsibly and Ethically
- You are 100% responsible for your final product.
- The use of AI must be open and documented.
- These guidelines are in effect unless given specific guidelines for an assignment or exam.
- Data that are confidential or personal should not be entered into generative AI tools.