Category Archives: AI

Where’s the Bot?



Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural language software developed by Google and trained to understand the myriad ways customers order off the menu, The Wall Street Journal reported.

The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.

According to the Wall Street Journal, Wendy’s didn’t disclose the cost of the initiative beyond saying the company has been working with Google in areas like data analytics, machine learning and cloud tools since 2021.

“It will be very conversational,” Mr. Penegor said about the new artificial intelligence-powered chatbots. “You won’t know you’re talking to anybody but an employee,” he said.

To do that, Wendy’s software engineers have been working with Google to build and fine-tune a generative AI application on top of Google’s own large language model, or LLM – a vast algorithmic software tool loaded with words, phrases and popular expressions in different dialects and accents and designed to recognize and mimic the syntax and semantics of human speech.

Gizmodo reported: AI chatbots have come for journalism, and now they are coming for our burgers. Wendy’s is reportedly gearing up to unveil a chatbot-powered drive-thru experience next month, with help from a partnership with Google.

“Google Cloud’s generative AI technology creates a huge opportunity for us to deliver a truly differentiated, faster and frictionless experience for our customers, and allows our employees to continue focusing on making great food and building relationships with fans that keep them coming back time and again,” said Wendy’s CEO Todd Penegor in a statement emailed to Gizmodo.

According to Gizmodo, Wendy’s competitor McDonald’s has already been experimenting with an AI drive-thru – with mixed results. Videos posted to TikTok illustrated just how woefully ill-prepared automation is at taking fast food orders, and how woefully unprepared humans are to deal with it.

McDonald’s began testing AI drive-thrus in June 2021 at 10 locations in Chicago. McDonald’s CEO Chris Kempczinski reportedly explained that the AI system had an 85% order accuracy. However, according to Restaurant Dive, in June 2022 the company was seeing order accuracy in the low 80s when it was really hoping for 95% accuracy before a wider rollout.

The Register, whose article title began with “Show us the sauce code…”, also reported that Wendy’s and Google have together built a chatbot for taking drive-thru orders using large language models and generative AI.

According to The Register, the system works by converting spoken fast-food orders to text that can be processed by Google’s large language model. A generative component added to the system is designed to make the chatbot interact with people in a more natural and conversational manner, so that it’s less rigid and robotic.

The completed model was trained to recognize specific phrases or acronyms customers typically use when ordering, such as “JBC” for Wendy’s junior bacon cheeseburger, “Frosties” for its milkshakes, or “Biggie Bags” for its combination meals. Unsurprisingly, The Register reported, the chatbot, like human workers, will gladly offer to upsize meals or add more items to an order, since it has been programmed to try to persuade hungry patrons to spend more cash.
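As a rough illustration of that phrase-recognition step, a pre-processing pass might normalize shorthand to canonical menu names before the order text reaches the language model. The mapping and function below are my own assumptions for illustration, not Wendy’s actual system:

```python
# Illustrative sketch only: normalize customer shorthand to canonical
# menu names. The mapping below is hypothetical, not Wendy's menu data.
SHORTHAND = {
    "jbc": "Junior Bacon Cheeseburger",
    "frosties": "Frosty",
    "frosty": "Frosty",
    "biggie bag": "Biggie Bag",
}

def normalize_order(text: str) -> str:
    """Replace known shorthand tokens with canonical menu names."""
    result = text.lower()
    # Replace longer shorthand first so "frosties" wins over "frosty".
    for shorthand in sorted(SHORTHAND, key=len, reverse=True):
        result = result.replace(shorthand, SHORTHAND[shorthand])
    return result

print(normalize_order("two JBCs and a frosty"))
# → two Junior Bacon Cheeseburgers and a Frosty
```

A production system would rely on the language model itself rather than string matching, but the idea is the same: map the many ways customers say an item onto one canonical menu entry.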

The Register also reported that Wendy’s will try out its AI-powered drive-thru service in June at a restaurant in Columbus, Ohio. Up to 80 percent of orders are reportedly placed by customers at the burger slinger’s drive-thru lanes, an increase of 30 percent since the COVID-19 pandemic.

Personally, I’m of two minds about this. On the one hand, if the AI turns out to be really good at what it does, it could make the drive-thru lines move faster. People wouldn’t have to wait as long, and Wendy’s would make more money.

On the other hand, I’m concerned that the AI will eventually be used in every Wendy’s. That could result in fewer job opportunities for real-life, human workers.


Embracing AI: Amazement Meets Anxiety



Over the past several weeks, I have been exploring ChatGPT quite heavily, using it to develop marketing ideas, write customer surveys, and even help brainstorm new product ideas. The results have been astounding and enlightening.

One task saved me at least 8 hours of work: creating a 48-question survey based on some external data in just under 30 minutes. It has made me wonder how the economics of this will play out.

I feel we stand at the dawn of a new era in technology. Artificial intelligence (AI) is advancing at an incredible pace, and I can only imagine what the landscape will look like in 2-3 years. The transformative potential of AI tools like ChatGPT is undeniable.

These tools have shown remarkable capabilities in tasks ranging from content creation to generating marketing ideas to building a marketing survey, making my life easier in countless ways.

AI will change the labor market. It cannot be ignored, and those who ignore it will be left behind.

I will explore the reasons for excitement and apprehension while offering insights on balancing AI innovation and preserving human livelihoods.

My amazement with AI arises from its potential to revolutionize virtually every aspect of our lives, if you put some thought into how to use it. ChatGPT, a prime example, has demonstrated unprecedented prowess in natural language processing (NLP): understanding complex language structures and generating coherent, contextually relevant responses. It does not criticize my vocabulary either.

As a result, it can assist with tasks like customer support, content generation, and even complex problem-solving in a way that was once considered the exclusive domain of humans.

The fear, however, stems from the potential consequences of AI replacing human labor. Many jobs, especially those involving repetitive tasks or data analysis, could be rendered obsolete by AI systems that can perform these tasks with unparalleled efficiency and accuracy.

This shift will lead to widespread change in employment as people struggle to adapt to an economy where their skills are no longer in demand.

It’s essential to recognize that AI is not a zero-sum game. History has shown that introducing new technologies often creates new opportunities and jobs, even as it displaces some existing roles. Just as past disruptive businesses caused significant changes, AI will force companies to shift and adapt.

Society needs to develop a proactive approach, with protective legislation as well as investment in education and retraining programs that prepare workers for the new opportunities AI will create. Businesses and educational institutions must work together to ensure a smooth transition for workers whose jobs are at risk, providing them with the skills and resources needed to thrive in an AI-driven world.

There are many aspects of human intelligence that AI cannot replicate today, such as empathy, creativity, and interpersonal skills, but that could change going forward as the digital world does not have a lot of heart built in.

One thing is for sure: I feel as if I have gained a personal assistant. I can tell it what I want, and thus far it has produced the desired results, turning mundane tasks that would have been significant time sinks into quick work, allowing me to recover that time and move on to other projects.

Photo by Jan Antonin Kolar on Unsplash


Discord Restored Its Privacy Policies After Pushback



TechRadar posted an update from Discord in which the company backtracks about its previously announced changes. From the update:

UPDATE: Discord has updated the Privacy Policy that will take effect on March 27, 2023, adding back the statements that were removed and adding the following statement: “We may build features that help users engage with voice and video content, like create or send short recordings.”

A Discord spokesperson contacted TechRadar to provide the following statement: “Discord is committed to protecting the privacy and data of our users. There has not been a change in Discord’s position on how we store or record the contents of video or voice channels. We recognize that when we recently issued adjusted language in our Privacy Policy, we inadvertently caused confusion among our users. To be clear, nothing has changed and we have reinserted the language back into our Privacy Policy, along with some additional clarifying information.”

“The recently announced AI features use OpenAI technology. That said, OpenAI may not use Discord user data to train its general models. Like other Discord products, these features can only store and use information as described in our Privacy Policy, and they do not record, store, or use any voice or video call content from users.”

“We respect the intellectual property of others, and expect everyone who uses Discord to do the same. We have a thorough Copyright and Intellectual Property policy, and we take these concerns seriously.”

In addition, TechRadar reported, the spokesperson asserted that if Discord’s policy “ever changes, we will disclose that to our users in advance of any implementation.”

Previously, Discord appeared to have updated some of the information in its “Information you provide to us” section. Originally, a portion of the “Content you create” section said (in part): “We generally do not store the contents of video or voice calls or channels. If we were to change that in the future (for example, to facilitate content moderation), we would disclose that to you in advance. We also don’t store streaming content when you share your screen, but we do retain the thumbnail cover image for a short period of time.”

Sometime later, Discord changed the “Content you create” section to: “This includes any content that you upload to the service. For example, you may write messages or posts (including drafts), send voice messages, create custom emojis, create short recordings of GoLive activity, or upload and share files through the services. This also includes your profile information and the information you provide when you create servers.”

It was that change that caused many people to have concerns that their content would be used by Discord’s AI bots. I honestly considered removing my art from Discord. It is good that Discord clarified things a little bit – for example, stating that “OpenAI may not use Discord user data to train its general models.”

That said, when a company pulls shenanigans like Discord did – I find it difficult to trust them with my artwork. If you feel that way as well, one thing you can do is get on Discord and look for “Privacy & Safety”. It opens to a section where you can turn off Discord’s ability to use your data, and to track screen reader usage.


Discord Quietly Removed Privacy Policies – Then Added Bad Ones



Last week, Discord announced new AI features powered by Midjourney’s image generator and chatbot technology powered by OpenAI, the makers of ChatGPT. The company’s existing chatbot, named Clyde, is now super-charged with artificially intelligent language parsing capabilities and there are other fun features.

Those features appeared to come at a cost: in the fine print of the company’s privacy policy, Discord made subtle changes that disturbed users. It revoked promises not to collect data about screen recording and voice and video chats. One day after getting called out, though, Discord undid those changes, Gizmodo reported.

TechRadar reported that a Discord spokesperson contacted TechRadar to provide the following statement: “Discord is committed to protecting the privacy and data of our users. There has not been a change in Discord’s position on how we store or record the contents of video or voice channels. We recognize that when we recently adjusted language in our Privacy Policy, we inadvertently caused confusion among our users. To be clear, nothing has changed and we have reinserted the language back into our Privacy Policy, along with some additional clarifying information.”

Discord continued: “The recently-announced AI features use OpenAI technology. That said, OpenAI may not use Discord user data to train its general models. Like other Discord products, these features can only store and use information as described in our Privacy Policy, and they do not record, store, or use any voice or video content from users.”

According to TechRadar, the biggest issue with this AI integration is the fact that it comes bundled with very deliberate changes to Discord’s privacy policy. The previous privacy policy, which is still in effect until March 26, 2023, had two important statements under the “The information we collect” section.

The first states that “We generally do not store the contents of video or voice calls or channels” and the second is “We also don’t store streaming content when you share your screen.”

But, TechRadar reported, when you check the new privacy policy, which is set to take effect on March 27, 2023, both those statements as well as the one claiming that “if we were to change that in the future (for example, to facilitate content moderation), we would disclose that to you in advance,” are now completely wiped.

Discord appears to have changed it to the following: “Content you create: This includes any content you upload to the service. For example, maybe you write messages or posts (including drafts), send voice messages, create custom emojis, create short recordings of GoLive activity, or upload and share files through the services. This also includes your profile information and the information you provide when you create servers.”
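Edits like this are easiest to spot when the two policy versions are compared mechanically. The snippet below diffs shortened paraphrases of the quoted passages; the strings and file labels are my own stand-ins, not Discord’s full policy text:

```python
import difflib

# Illustrative only: shortened paraphrases of the two policy versions,
# diffed so removed statements show up with a leading "-".
old_policy = [
    "We generally do not store the contents of video or voice calls or channels.",
    "We also don't store streaming content when you share your screen.",
]
new_policy = [
    "This includes any content you upload to the service.",
]

for line in difflib.unified_diff(
        old_policy, new_policy,
        fromfile="policy-before-2023-03-27",
        tofile="policy-after-2023-03-27",
        lineterm=""):
    print(line)
```

The two “-” lines in the output are exactly the promises TechRadar noticed were wiped.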

In addition, TechRadar reported that it could be possible for Discord to let its AI bots engage in rampant art theft by stealing the art creators have already posted on Discord. This, alone, makes me want to remove all the art I’ve posted there.


HireVue Uses a Face-Scanning Algorithm to Decide Who to Hire



It has been said that the robots are coming to take your job. Not as much has been said about artificial intelligence being used to sort through job applicants and determine who to hire. A recruiting-technology firm called HireVue does just that. Your next job interview might require you to impress an algorithm instead of an actual human.

The Washington Post has a lengthy, detailed article about HireVue and the ethical implications of its use. According to the Washington Post, more than 100 employers now use the HireVue system, including Hilton, Unilever, and Goldman Sachs, and more than a million job seekers have been analyzed. The use of HireVue has become so pervasive in the hospitality and finance industries that universities are training students on how to look and speak for the best results.

But some AI researchers argue the system is digital snake oil – an unfounded blend of superficial measurements and arbitrary number-crunching that is not rooted in scientific fact. Analyzing a human being like this, they argue, could end up penalizing nonnative speakers, visibly nervous interviewees, or anyone else who doesn’t fit the model for look and speech.

According to The Washington Post, the AI in HireVue’s system records a job candidate and analyzes their responses to questions created by the employer. The AI focuses on the candidate’s facial movements to determine how excited someone seems about a certain work task or how they would handle angry customers. Those “Facial Action Units” can make up 29 percent of a person’s score. The words they say and “audio features” of their voice make up the rest.
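If the reported weighting is taken at face value (facial analysis up to 29 percent, words and audio features the rest), a candidate’s composite score might combine as a simple weighted blend. The formula below is my assumption; HireVue has not published its actual scoring method:

```python
def composite_score(facial: float, verbal_audio: float,
                    facial_weight: float = 0.29) -> float:
    """Blend sub-scores (each on a 0-100 scale) into one number.

    The 29% facial weight comes from the Washington Post's reporting;
    the linear blend itself is a guess at how the parts might combine.
    """
    return facial_weight * facial + (1 - facial_weight) * verbal_audio

# A candidate who reads poorly on camera but answers well:
print(round(composite_score(60, 90), 1))  # → 81.3
```

Even in this toy version, a camera-unfriendly candidate loses several points before a human ever sees them, which is exactly the critics’ worry.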

This situation makes me think of ImageNet Roulette, an AI that was trained on the ImageNet database. People posted selfies to ImageNet Roulette, and the AI gave them problematic classifications. You may have seen people sharing their selfies on social media, and noticed the racist, misogynistic, and cruel labels that the AI added.

The purpose of ImageNet Roulette was to make it abundantly clear that AI can be biased (and cruel) if it uses a dataset that includes very negative classifications of people. From this, it seems entirely possible that hiring decisions made by AI such as HireVue could be heavily biased for or against certain types of people. I would like to see some research done to determine who the HireVue AI favors – and who it ends up excluding.


ImageNet Roulette Reveals that AI Can Be Biased



Have you heard of ImageNet Roulette? Chances are, some of the people you follow on social media have tried it out. In some cases, ImageNet Roulette produces some controversial, and cruel, results. This is a feature, not a bug!

ImageNet Roulette was launched earlier this year as part of a broader project to draw attention to the things that can – and regularly do – go wrong when artificial intelligence models are trained on problematic training data.

The ImageNet Roulette website provides further explanation. ImageNet Roulette is trained on the “person” categories from a dataset called ImageNet, which was developed at Princeton and Stanford Universities in 2009. It is one of the most widely used training sets in machine learning and research development.

The AI that is trained on the ImageNet dataset will base its responses to each selfie on the information in that dataset. This posed a dilemma for the researchers who released ImageNet Roulette.

The researchers explained: “One of the things we struggled with was that if we wanted to show how problematic these ImageNet classes are, it meant showing all the offensive and stereotypical terms they contain. We object deeply to these classifications, yet we think it is important that they are seen, rather than ignored or tacitly accepted. Our hope was that we could spark in others the same sense of shock and dismay that we felt as we studied ImageNet and other benchmark datasets over the last two years.”

A warning appears near the top of the ImageNet Roulette website: ImageNet Roulette regularly returns racist, misogynistic and cruel results. The site points out that this is because of the underlying dataset it draws on: ImageNet’s “Person” categories. ImageNet is one of the most influential training sets in AI, and ImageNet Roulette is a tool designed to show some of the underlying problems with how AI classifies people.

If you put a selfie on ImageNet Roulette, and received racist, misogynistic, or cruel results, you may have felt hurt or offended. This is because the AI was basing its responses on information from a dataset that included very negative classifications of people. It seems to me that the point of ImageNet Roulette was to emphasize that AI cannot be unbiased if the data it has to work with is biased. What better way to make that clear than by letting people post their results to social media?
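The underlying point can be shown with a toy classifier: a model can only ever answer with labels present in its training data, so a skewed label vocabulary guarantees skewed output. The labels and the scoring below are made up for illustration and have nothing to do with ImageNet’s real categories:

```python
import random

# Toy sketch: the output vocabulary is fixed by the training labels.
# These labels are invented for illustration.
TRAINING_LABELS = ["pilot", "nurse", "suspect", "loafer"]

def classify(image_features, labels=TRAINING_LABELS):
    """Pretend classifier: picks deterministically (seeded by the input)
    from the fixed label set, standing in for a real model's argmax."""
    rng = random.Random(sum(image_features))
    return rng.choice(labels)

# Whatever the input, the answer always comes from TRAINING_LABELS.
print(classify([3, 1, 4]) in TRAINING_LABELS)  # → True
```

If that fixed label set contained offensive categories, every selfie would be mapped onto one of them – which is exactly what ImageNet Roulette demonstrated.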

The ImageNet Roulette project has officially achieved its aims. It will no longer be available online after September 27, 2019. It will, however, remain in circulation as a physical art installation (currently on view at the Fondazione Prada Osservatorio in Milan until February 2020).


Microsoft Invests in and Partners with OpenAI



Microsoft has formed a multiyear partnership with OpenAI. Microsoft has invested $1 billion and will focus on building a platform that OpenAI will use to create new AI technologies.

Microsoft Corp., and OpenAI, two companies thinking deeply about the role of AI in the world and how to build secure, trustworthy and ethical AI to serve the public, have partnered to further extend Microsoft Azure’s capabilities in large-scale AI systems. Through this partnership, the companies will accelerate breakthroughs in AI and power OpenAI’s efforts to create artificial general intelligence (AGI). The resulting enhancements to the Azure platform will also help developers build the next generation of AI applications.

The partnership covers the following:

  • Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies
  • OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence
  • Microsoft will become OpenAI’s preferred partner for commercializing new AI technologies

The press release states that Microsoft and OpenAI will build a computational platform in Azure that will train and run increasingly advanced AI models, include hardware technologies that build on Microsoft’s supercomputing technology, and adhere to the two companies’ shared principles on ethics and trust. Their intent appears to be to create a foundation for advancements in AI that can be implemented in a safe, secure and trustworthy way.

OpenAI states that they and Microsoft have a mission to ensure that artificial general intelligence (AGI) benefits all of humanity. This requires ensuring that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is shared.

I’m willing to believe that OpenAI and Microsoft are being honest in their motivations. My concern is that they may be unable to prevent the problem of having biased data unintentionally seeping into their AGI. I’m very curious to see precisely how the economic upside of their AGI is shared and who it is shared with.