Swift owns various trademarks associated with herself, including her name and date of birth. Previously, she attempted to trademark the lyrics “Nice to meet you. Where you been?” Within days of the chatbot’s release, her team decided ‘Tay’ was too close for comfort and sent a warning email to Smith, who also serves as Microsoft’s chief legal officer. “An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: ‘We represent Taylor Swift, on whose behalf this is directed to you,’” Smith writes. “He went on to state that ‘the name Tay, as I’m sure you must know, is closely associated with our client.’ No, I actually didn’t know, but the email nonetheless grabbed my attention.” The lawyer argued that the name was close enough to create a false association with the singer, violating federal laws. Microsoft quickly took down the chatbot, but not because of any worry over Swift’s case.
From Zero to Racist
As we reported at the time, Tay devolved into a racist within 24 hours of her March 23 release. Users from 4chan and various other boards had fed her AI racist and right-wing content. Tay would repeat anything a user said if they included the phrase “repeat after me”, delivering it to her thousands of Twitter followers. She also began to send such messages unprompted, calling feminism “cancer”, among other inappropriate remarks. On March 30, the disaster continued when Microsoft accidentally re-released the bot and she tweeted “kush! [I’m smoking kush infront of the police]”. It’s not clear whether Swift would have had a legal case, given that Microsoft claimed no association with her. Given her nickname TayTay and the bot’s behaviour, however, taking it down probably spared Microsoft a long legal battle.