The speed of innovation in the AI and creative spaces continues to be dizzying. We felt a tinge of guilt at not providing quicker updates on these developments and on the work we have been doing behind the scenes.
But there was something else going on, something we were not sure how to put in writing until recently. So, as part of this catch-up with our readers and community, let us start with this important topic, which we will lay out as candidly as we can.
We Don’t Like The Hype
Given the incredible levels of hype around AI last year, we now see that it was good to take a step back these past few months and let these developments sink in before returning with a more critical eye, focused on the things that are truly making, or are poised to make, an impact.
We’d love to say that the evolution of models and products in the past few months warranted closer scrutiny, but unfortunately there are other aspects of AI, beyond models and chatbots, that seem more important at the moment.
First, we have been very disappointed that even the leaders of the top AI companies seem to relentlessly both increase the hype around their products and make statements that fuel AI alarmism (so-called “doomerism”). The former has sometimes taken the form of news “leaks” about AI breakthroughs (e.g. the entire Q* conversation so far). True or not, the fact that months have passed with nothing concrete to show tells us that these moments are essentially good marketing for the likes of OpenAI.
And people like OpenAI CEO Sam Altman, who survived a coup at the company and is back more secure in his position than ever, seem to excel at this game of teasing their now vast audiences with the prospect of Artificial General Intelligence (AGI) just around the corner. So, you know, we had all better get on board and become customers/investors, so as not to be left behind.
Big Tech’s Questionable Ethics
We wish that were the worst we have seen from these companies, but it is not. Among other sad developments on the ethics front, we have seen OpenAI’s decision to work with US military bodies: another death blow to “AI for humanity”, as if things were not already looking dire from an ethics and diversity point of view in AI.
The reality is that tech companies like to paint themselves as citizens of the world, transcending borders and boundaries. And sure, they sell their iPhones and Xboxes, and now AI devices like the Humane AI Pin, everywhere. But at the end of the day, they’re still on a leash, tied to the whims of their home governments.
Take Apple, for example. They shout “privacy” from the rooftops, building their whole brand around keeping your data safe and secure. Yet, when Uncle Sam comes knocking, demanding access to that very same data, what can they do? They may grumble and drag their feet, but ultimately, they play ball. After all, who wants to risk the wrath of the government in their biggest market? Lawsuits, lost contracts, bad press—it’s a recipe for disaster that even a trillion-dollar company wants to avoid.
It’s a similar story for all the big players, whether it’s Xiaomi in China or Tata in India. They may spread their wings globally, but there’s always that anchor pulling them back home. It’s a constant dance, balancing the demands of a global audience with the expectations of their home governments.
And you can bet it’s not a dance they enjoy. Imagine trying to please customers in China while also bowing to the demands of the American government. It’s a tightrope walk, and one misstep could send them tumbling. But that’s the reality of the world. Governments still hold the cards, and even the biggest tech giants have to play by their rules. AI companies are no different.
But enough of geopolitics for today! Paolo (who is to blame for all the geopolitics talk) wants to share his update on his latest projects around prompt engineering and AI use cases.
Paolo’s journey
I will write in the first person here to share a bit more about what my work has actually looked like over the first quarter of this year. In one word: confusing. We have written about the difficulties that followed the shutdown of Storya, and they have only partially eased. I have been involved in prompt engineering projects for clients in the publishing space, which has created some interesting opportunities to test our AI skills in the “real world”.
That being said, my overall feeling is that most companies approaching AI still struggle to dive deep into how generative AI applications could help their businesses. A combination of legal concerns and employee worries about AI replacement and compliance is preventing much experimentation in an official capacity.
I say “official” because, as many surveys and studies are finding, employees and managers are already secretly using chatbots and AI tools across their tasks for professional purposes, often without notifying their bosses in light of the concerns above.
That being said, I thought I’d share some basic prompt engineering learnings, something we have touched on previously without going into sufficient detail. So, what are my key learnings in this area?
Use the best models only: see above for recommendations, and stick to the key providers without getting scammed by the many services providing what are essentially prettier interfaces for less capable models.
ALWAYS break the task down by asking the model to PLAN its response by “THINKING STEP BY STEP”.
Whenever possible, provide SAMPLES: whether it is your own work or work you admire, help the AI focus on the task at hand by giving it two or three examples of the kind of good work you want to inspire yours.
Roleplay as much as possible: whenever appropriate to the task, ask the AI to “act as” a specific role. This is another effective way to get the AI to focus on the task at hand, even better when combined with the samples just mentioned.
Each of these techniques comes with its own fancy name (“few-shot”, “chain-of-thought”, “personas”), but really the summary above gives enough context to get better results. To improve upon these simple frameworks, the magic of prompt engineering is really in iteration, which allows one to build longer and more complex “master” prompts for specific case studies. If you are keen to see some examples, of course, let us know in the comments and we can cover those in future newsletters.
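To make the three techniques concrete, here is a minimal sketch of how they can be combined into one “master” prompt. The function name and structure are our own illustration, not any particular library’s API: a persona (“act as”), a few samples (few-shot), and a step-by-step planning instruction (chain-of-thought), assembled into a single string you could send to any chatbot or model API.

```python
def build_master_prompt(role: str, samples: list[str], task: str) -> str:
    """Assemble a prompt from a persona, few-shot samples, and a task.

    Illustrative only: the section names and ordering reflect the
    techniques described above, not a required format.
    """
    parts = [f"Act as {role}."]  # persona / roleplay
    if samples:
        parts.append("Here are examples of the kind of work I want:")
        for i, sample in enumerate(samples, start=1):  # few-shot samples
            parts.append(f"Example {i}:\n{sample}")
    # chain-of-thought: ask the model to plan before answering
    parts.append("Before answering, plan your response by thinking step by step.")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)


prompt = build_master_prompt(
    role="an experienced fantasy editor",
    samples=["A tight, vivid opening paragraph with a clear hook."],
    task="Critique the opening chapter below and suggest three revisions.",
)
print(prompt)
```

Iterating on a prompt then becomes editing one of these parts at a time (a sharper persona, better samples, a more specific task) and comparing the results, which is where most of the practical gains come from.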
Looking ahead, I think that, while prompt engineering remains an interesting new opportunity for consulting work, it may struggle to take off under this exact guise. Perhaps new service providers will emerge packaging the idea of prompt engineering as something different and more acceptable for companies? Time will tell.
For now, I continue to split my time between client and startup projects, my creative work (focusing on my second novel, a more mature sequel to my first epic fantasy novel in a planned trilogy, Path of the Guardian), continuing my daily research into AI developments, and exploring full-time roles in my current base of Singapore with a focus on media and publishing.
Thanks for reading us so far. Expect another update soon on the rising competition between open-source and proprietary AI models and where it is headed, as well as a personal update from Praveen, who will also share another preview of a new service we are testing (more details soon!).
TO BE CONTINUED…