
FOD#10: Prompt Engineer vs AI Engineer, and Superhuman Human Fallacy

Keep the perspective: every week we connect the dots to provide a comprehensive understanding of the AI world

Froth on the Daydream (FOD) is our weekly summary of over 150 AI newsletters. We connect the dots and cut through the froth, bringing you a comprehensive picture of the ever-evolving AI landscape. Stay tuned for clarity amidst the surrealism and experimentation.

Today, we will discuss the new professions of prompt engineer and AI engineer; we’ll examine the superhuman human fallacy and what it has to do with superalignment; we’ll share news about “governance-in-a-box”; and we’ll take a look at China and highlight some great discussions around Threads. Useful links to the main research papers will also be provided, as well as a note about the end of the world. Enjoy!

Prompt Is All You Need

Let’s be more practical today and discuss something you can experiment with this week. Since prompts have been a popular topic of discussion recently, we’ve decided to share a few useful links so you can gain some hands-on experience.

I enjoyed Andrew Ng’s editorial in The Batch, in which he says that prompt-based development using LLMs has accelerated the ML development cycle, reducing project timelines from months to days. Because of this rapid pace, test sets are often skipped. The approach also makes it possible to try out multiple projects without extensive planning, and to build proofs of concept quickly to test feasibility. Ng encourages brainstorming prompt-based applications and implementing them safely to assess their value.

Prompt engineering is the process of designing and fine-tuning input prompts to guide a machine learning model, like a language model, towards producing the most accurate and desired output.
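To make that definition concrete, here is a minimal sketch of what “designing and fine-tuning input prompts” often looks like in practice: the same classification task phrased as a bare zero-shot prompt and as a few-shot prompt with worked examples. The task, the example reviews, and the helper names are all invented for illustration; no model API is called.

```python
# A toy prompt-engineering iteration: zero-shot vs. few-shot phrasing
# of the same sentiment-classification task. The reviews are made up.

def build_zero_shot(text: str) -> str:
    # First attempt: just state the task and the input.
    return (
        "Classify the sentiment of this review as positive or negative.\n\n"
        f"Review: {text}\nSentiment:"
    )

# Refinement: prepend a couple of worked examples so the model
# sees the expected input/output format before the real query.
FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took five minutes and it just works.", "positive"),
]

def build_few_shot(text: str) -> str:
    parts = ["Classify the sentiment of each review as positive or negative.\n"]
    for review, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Review: {review}\nSentiment: {label}\n")
    parts.append(f"Review: {text}\nSentiment:")
    return "\n".join(parts)

prompt = build_few_shot("Screen scratches if you look at it wrong.")
print(prompt)
```

The “engineering” is in the iteration: you compare how the model behaves on each variant and keep refining wording, examples, and format until the outputs are reliable.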

So, how does one do it? Hackernoon has published a new series titled 'Prompt Engineering 101: Part 1 and Part 2.' These posts are lengthy (requiring 37 and 32 minutes to read, respectively) but are quite helpful.

The Exponential View offers some excellent and well-thought-out examples. They provide Promptpack, an easy-to-use series of prompts, which can significantly help make sense of trends and events. You should also check out a post from Microsoft Research that details the methodology used in Microsoft's Dynamics 365 Copilot and the Copilot in Power Platform. They focus on aspects of prompt engineering and knowledge grounding, emphasizing the importance of skillful prompt creation. They propose an "Ideate/Experiment/Fine Tune" approach, rounding off the article with an example of a sample prompt. Lastly, for this mini-guide on prompts, there is a free book titled “Mastering Generative AI Text Prompts.”

The Pentagon has also been experimenting with prompts. Bloomberg features a story about Matthew Strohmeyer, a US Air Force colonel who used a large language model (LLM) for the first time in a military exercise, demonstrating its speed and efficacy. The tests examined whether models fed classified operational information could help plan a response to a potential escalation of the already tense military situation with China. However, there are still concerns about security, bias, and data poisoning, so while it's not ready for primetime yet, it could be very soon.

AI Engineers to Dethrone Prompt Engineers

As everything in AI now happens all at once, prompt engineering might already be passé! A new profession has recently emerged, aptly named the AI Engineer, with Swyx from Latent.Space as its prophetic voice.

What does it mean to become an AI engineer? Artificial Ignorance offers a few intriguing ideas.

Code interpreter might also help. OpenAI rolled it out to all ChatGPT plus users on July 6. According to the release notes, “It lets ChatGPT run code, optionally with access to files you've uploaded. You can ask ChatGPT to analyze data, create charts, edit files, perform tasks..." Ethan Mollick from One Useful Thing, who has been granted early access, provides valuable guidance on how to start working with the code interpreter. He observes, “The Code Interpreter continues OpenAI’s long tradition of giving terrible names to things, because it might be most useful for those who do not code at all. It essentially allows the most advanced AI available, GPT-4, to upload and download information, and to write and execute programs for you, in a persistent workspace. That allows the AI to do all sorts of things it couldn’t do before, and be useful in ways that were previously impossible with ChatGPT.”
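Under the hood, Code Interpreter simply writes and executes ordinary Python against the files you upload. The snippet below is a hand-written stand-in for the kind of program it might generate for a “summarize this data” request; the CSV contents and the summary fields are invented for illustration.

```python
import csv
import io
import statistics

# A made-up CSV standing in for a user-uploaded file.
uploaded = io.StringIO(
    "month,revenue\nJan,120\nFeb,135\nMar,150\nApr,148\n"
)

# Parse the rows and pull out the numeric column.
rows = list(csv.DictReader(uploaded))
revenue = [float(r["revenue"]) for r in rows]

# Produce the kind of quick summary a "describe this data" prompt yields.
summary = {
    "rows": len(rows),
    "mean_revenue": statistics.mean(revenue),
    "max_month": max(rows, key=lambda r: float(r["revenue"]))["month"],
}
print(summary)
```

The point of Mollick’s observation is that the user never sees or writes any of this: they upload the file, ask in plain English, and the model authors and runs the code in its sandboxed workspace.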

Soooo, I suppose the question remains: do you need to know how to code to be an AI engineer or a prompt engineer?

These are fascinating times, when it depends only on you to pick up a new profession and learn it within days.

As the Wall Street Journal observes, “Excitement over artificial intelligence is proving a powerful counterforce for a tech economy that had been slowing, lifting share prices and growth outlooks at many giants, and igniting a wave of new startups.”

To add to the excitement, OpenAI (the leader of all AI news) announces that all paying API customers now have access to the GPT-4 API. They declare, “We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.”
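For anyone planning that migration, the practical difference is the request shape: the legacy Completions API took a single free-form prompt string, while Chat Completions takes a list of role-tagged messages. The sketch below shows the two payload shapes as they stood at the time of the announcement; the prompt text is invented, and no request is actually sent.

```python
import json

# Legacy Completions API: one free-form prompt string.
legacy_payload = {
    "model": "text-davinci-003",
    "prompt": "Translate to French: Hello, world.",
    "max_tokens": 50,
}

# Chat Completions API: a list of role-tagged messages.
# A "system" message sets behavior; "user" carries the actual request.
chat_payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a translation assistant."},
        {"role": "user", "content": "Translate to French: Hello, world."},
    ],
}

print(json.dumps(chat_payload, indent=2))
```

Migrating mostly means moving the old prompt text into a `user` message and, optionally, splitting any standing instructions out into a `system` message.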

The Superhuman Human Fallacy

While envisioning that future, OpenAI has grown increasingly concerned about it. A new superalignment team has been formed to constrain superintelligence if and when it becomes a reality, and the company is dedicating 20% of its computing resources to the goal. Despite not currently facing a tangible problem, they are determined to solve it within four years.

It's quite timely that I’m reading Pamela McCorduck’s book “Machines Who Think,” where an interesting phrase leapt out at me: “And something tells me that a kinship exists between the need to posit these super-smart machines and the very common modern view that machines can't be said to ‘think’ unless they exhibit superhuman skills. This latter notion is so widespread that Seymour Papert of MIT, who currently works in the field of artificial intelligence, has coined a phrase for it. He calls it ‘the superhuman human fallacy.’”

This superalignment movement has even sparked disagreement within the Effective Altruism community, of which OpenAI is a part. Psychology professor and writer Geoffrey Miller argues, “I see this 'superintelligence alignment' effort as cynical PR window-dressing, intended to reassure naive and gullible observers that OpenAI remains one of 'the good guys', even as they escalate the imposition of extinction risks on humanity.”

Captivating, isn’t it?

Here’s some food for thought: “Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers—not rights bearers—could work better.” A new paper, “Should Robots Have Rites or Rights,” suggests that “the Confucian alternative of assigning rites (or role obligations) is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and the potential for conflict between humans and robots is concerning.”

AI Governance For Sale

This was not the only announcement last week. In his newsletter, 'The Road To AI We Can Trust,' Gary Marcus informed the public about the launch of CATAI, the Center for the Advancement of Trustworthy AI. The center plans to create a 'governance-in-a-box' solution and sell it to countries 'that lack the expertise to develop their own regulatory regimes for AI,' presumably customizable to their individual needs. I mean, AI has certainly sparked a lot of business ideas!

Correction from Gary Marcus: CATAI is designed to be a nonprofit, distributing the solution for free or minimal cost.

Inflection Also Develops Something Super

Remember Inflection, which just announced its chatbot Pi and raised a staggering $1.3 billion? While OpenAI is working on superalignment, Inflection is focusing on building a supercomputer. It managed to secure 22,000 NVIDIA H100 AI GPUs rather easily, setting its sights on building one of the largest AI supercomputers in the industry. This machine will consume a whopping 31 megawatts of power and is expected to significantly boost the performance of the Inflection-1 AI model. Everything about Inflection, at least in terms of numbers, is simply gigantic.
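A quick back-of-the-envelope check shows the 31 MW figure is in a plausible range. Assuming roughly 700 W per GPU (the TDP of the SXM variant of the H100 — an assumption on our part, not a reported spec of Inflection’s cluster):

```python
# Sanity-checking the quoted 31 MW against the GPU count alone.
# 700 W per GPU is an assumed figure (H100 SXM TDP), not Inflection's spec.
gpus = 22_000
watts_per_gpu = 700

gpu_power_mw = gpus * watts_per_gpu / 1_000_000
print(f"GPU draw alone: {gpu_power_mw:.1f} MW")
```

That puts the GPUs alone at about 15.4 MW; host CPUs, networking, storage, and cooling overhead would roughly double that, landing in the neighborhood of the quoted 31 MW.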

On a side note, if you're looking to build a supercomputer or anything super and are short on GPUs, Hive Blockchain has reoriented 38,000 GPUs from crypto to AI. Just FYI.

Public Data Privacy on the Internet: Google's New Policy Ignites Discussion

Google recently updated its privacy policy, stating that it can now use public data to train its artificial intelligence systems. This development sparked a riveting conversation on Hacker News.

The Lone Banana Problem

The CEO of Digital Science has uncovered an intriguing bias: it seems practically impossible to get an image-generation model to produce a picture of a solitary banana; they invariably appear in pairs. After reading this, the likelihood that you'll attempt to generate a single-banana picture is quite high, so I feel compelled to warn you: it's a time-consuming and potentially frustrating endeavor.

Tired of LLMs?

Dive into a wonderfully detailed overview by Sebastian Raschka in ‘Ahead of AI’. It provides a comprehensive update on what’s happening in the field of Computer Vision.

Research

China

Last week, we published a six-month report tracking China's developments, and we remain engrossed in the country's progress.

Among the standout moments of the 2023 World Artificial Intelligence Conference (WAIC 2023) were the keynote speeches. Elon Musk deliberated on AGI's impact on human civilization, while Yann LeCun underscored the critical role of open-source platforms in ensuring the safety and practicality of AI. Andrew Chi-Chih Yao presided over a discussion centered on AI theoretical breakthroughs, with the ultimate aim of developing intelligent robots possessing diverse perceptual abilities.

In regulatory strides, China has established a new government body, the China Electronic Standardisation Institute, under the Ministry of Industry and Information Technology. The institute's mission is to introduce a national standard for large language models (LLMs).

The AI community is still strong on Twitter, but Threads has made quite an impressive appearance

The End of the World

Amidst our preoccupation with AI, the Earth is undergoing scorching heatwaves. According to a report by The New York Times, there are indications that our planet might be entering a prolonged phase of unprecedented heat. On Tuesday, the global average temperature reached a staggering 62.6 degrees Fahrenheit (17 degrees Celsius), marking the hottest day recorded since 1940.

To delve deeper into this topic with AI, here are a few intriguing links:

Thank you for reading, please feel free to share with your friends and colleagues. Every referral will eventually lead to some great gifts 🤍
