Unlike last week (when I struck a note of despair), I start our conversation on a cheerful note. Nvidia just extended its superiority in the AI chip space, by quite some margin. Again. Succeeding the H100 AI chip, which played its part in helping the tech giant touch $2 trillion in valuation earlier this year (if you hadn't noticed, there is a gentle pivot from a gaming and graphics focus to AI processing), is the Nvidia GB200 Grace Blackwell superchip – two Blackwell B200 Tensor Core GPUs and a Grace CPU (complete technicalities here, in case you'd like to read), connected by a 900GB/s ultra-low-power NVLink chip-to-chip interconnect. The winners?
Nvidia, of course. And organisations, which will now use less hardware and less power to train AI models. According to Nvidia's numbers, training a 1.8 trillion parameter model would previously have required as many as 8,000 previous generation Hopper GPUs, with total power consumption of about 15 megawatts – fast forward to now, and the same task can be done with 2,000 Blackwell GPUs consuming 4 megawatts. A significant step forward, pairing greater processing power with reduced power consumption, as AI models become more and more complex.
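To put those cited figures in perspective, here is a minimal back-of-the-envelope sketch using only the numbers quoted above (real-world consumption will, of course, vary by workload and configuration):

```python
# Figures as cited for the same 1.8 trillion parameter training job.
hopper_gpus, hopper_mw = 8_000, 15.0        # previous generation Hopper
blackwell_gpus, blackwell_mw = 2_000, 4.0   # new Blackwell GPUs

gpu_reduction = hopper_gpus / blackwell_gpus         # 4x fewer GPUs
power_reduction = hopper_mw / blackwell_mw           # 3.75x less total power
per_gpu_kw = (blackwell_mw * 1000) / blackwell_gpus  # ~2 kW per Blackwell GPU

print(f"{gpu_reduction:.2f}x fewer GPUs, "
      f"{power_reduction:.2f}x less total power, "
      f"~{per_gpu_kw:.1f} kW per GPU")
# → 4.00x fewer GPUs, 3.75x less total power, ~2.0 kW per GPU
```

Notice that per-GPU draw stays roughly the same; the efficiency gain comes almost entirely from needing a quarter of the hardware for the same job.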
REGULATE
This cannot happen quickly enough, and the good news is, there's little lethargy to hold it back. Artificial intelligence (AI) regulations saw progress towards the end of last year. The Bletchley Declaration at the UK AI Safety Summit, US President Joe Biden signing an executive order, and the G7 agreement on a code of conduct for AI companies (you can read my piece from the time) made crucial first moves in regulating generative AI, coinciding with its much broader availability. Now, the European Commission has sent what it calls requests for information to big tech platforms, on how they are reducing generative AI risks.
That's a subtle way of saying a grilling on the facts and policies is overdue. The requests have been sent with the Digital Services Act (DSA) as the foundation. Who have the requests been sent to? The DSA notices categorise these platforms in two distinct segments – Very Large Online Search Engines, or VLOSEs, including Google Search and Microsoft Bing, and Very Large Online Platforms, or VLOPs, including Facebook, Instagram, Snapchat, TikTok, YouTube, and X. This is what the EU is looking to understand, and if not already in place, ensure tech platforms have fail-safes for…
- Mitigation measures for risks linked to generative AI, such as so-called ‘hallucinations’ where AI provides false information.
- There is also the not-so-small matter of deepfakes and how easily they go ‘viral’ on social media platforms, with few checks to stall that.
- With 2024 the year of elections in many, many countries, there is obvious concern about how digital platforms may be ripe for manipulation via communication sent to users (who are voters, in the grand scheme of things).
- The notice’s focus is broad based – the European Commission wants information on dissemination of illegal content, protection of fundamental rights, gender-based violence, protection of minors, mental well-being, protection of personal data, consumer protection and intellectual property.
- The platforms, whether willingly or dragged kicking and screaming, must respond to the European Commission by April 5 with details related to the protection of elections, and by April 26 for the remaining queries.
Analysis by consultancy firm Deloitte expects that within the first half of this year, Europe's regulation on generative AI will become “far clearer”. Their estimation is that when the final version of the EU AI Act (AIA) is confirmed, it may include specific requirements: foundation models being registered in an EU database, clarity on a model's lifecycle, testing data on predictability, interpretability, corrigibility, safety, and cybersecurity, as well as data governance standards that will dictate access to quality data alongside assessment of bias.
Days earlier, India moved quickly to simplify some AI requirements that were announced only recently – there is now no need to get a permit for untested generative AI models. Instead, the focus has shifted to labelling AI-generated content, which everyone hopes will make it easier to distinguish from actual photos, videos or music when shared on different platforms. Will labelling actually work?
That's a question I had pondered over recently. Meta, OpenAI and the Adobe-led Coalition for Content Provenance and Authenticity (C2PA) seem bullish on the idea. Yet, these pursuits seem more like islands, each living in its own isolation. Take, for example, the fake images of Taylor Swift that did the rounds on many platforms recently – X (formerly Twitter) could only control the spread by completely blocking any search for the phrase “Taylor Swift”, which indicates how powerless its own systems were in checking the eventual virality. And we could point the same finger at Telegram or 4chan, both of which are believed to be the source of these fake(d) generations.
NOTICE
Quite a contrast to how Apple usually makes announcements these days. Some researchers at Apple have (rather) quietly published a paper that details the work done on what's thus far called MM1. Think of this as a family of multimodal models. For starters, by sharing MM1's methodologies even though the work is at a very early stage, Apple could be contributing significantly to more open AI research. Secondly, by simply letting the world know it is very much aboard the AI bandwagon, Apple hopes rivals will take notice (and they should, considering the Cupertino giant's tech, talent and financial might). Interesting things to note at this stage.
The size (MM1's 30 billion parameters), for instance, may pale in comparison to Claude 3 Opus' reported 2 trillion parameters and GPT-4's reported trillion-plus. If you think that's a disadvantage, it's not the complete picture. Take video for training, alongside audio, text and images. MM1 can train on and generate video, whereas most rival models train on text, images and audio, but not always video. It may still be early days, but I do expect Apple to go beyond a deeper, automated embed of AI in iOS (photos, messages, notes and so on) to a more consumer-facing approach, because that's in vogue. It needs to be seen in competition with Google, Microsoft, OpenAI and others. Siri's era may be at an end, and with the summer's annual WWDC conference over the horizon, expect to hear more on AI models. We've just heard of MM1, but my belief is, it surely isn't the only one.
Yet, if Apple still decides to use Google Gemini or OpenAI's models in iOS at some stage, it'll effectively be dropping itself out of the AI battles.
PAY?
At this point, I must ask you this question for which I do not at all expect an answer immediately. Take your time, and ponder over it. Experiment if you wish to. But here it is – Do you see yourself paying for a generative AI assistant yet?
I ask this because generative AI subscription choices are now quite literally calling out to us, begging, pleading, that we part with some money every month, to get faster responses to whatever's on our mind. And so on. A few days ago, Microsoft unlocked the Copilot Pro subscription for more regions, in case you'd like access to GPT-4 Turbo, the ability to build your own Copilots, image generation boosts for Designer, or use of the assistant within Microsoft's apps, including Word and Outlook (but only with a Microsoft email address). How much does this arsenal of tools you may or may not need cost you every month? That'd be ₹2,000.
That's in line with Google's Gemini Advanced (₹1,950 per month for Google One with Gemini Advanced), OpenAI's ChatGPT Plus and Perplexity Pro. All these subscriptions try to make a case for themselves. For Microsoft, the biggest advantage for anyone also using a Microsoft 365 subscription is the deeper integration within the productivity apps. For Google, the pitch is Gemini's integration within Gmail, Docs and Meet, as well as 2TB of cloud storage (which we all know we need; but Google Drive, really?).
Perplexity perhaps has the most interesting pitch, if you are considering an AI assistant subscription purely for the conversation and information-retrieval aspect of it. The Perplexity Pro plan unlocks an option to choose between different models – OpenAI's GPT-4 Turbo, Anthropic's Claude 3 and Mistral Large too. Unless you want your AI assistant's subscription to unlock plug-ins in more apps you use, there is a genuinely interesting proposition in having different AI models available at your fingertips. It depends on what you want. Irrespective, don't just get a subscription because of FOMO (or whatever the cool kids call it these days). For most questions you'll chuck its way, a free version of Copilot or Gemini or ChatGPT will likely be more than adequately knowledgeable and powerful.
MARKET PLACES
- Google continues to be under regulatory scrutiny in India. The Competition Commission of India (CCI) has opened another investigation into Google Play Store billing and service fee policies. This is likely a follow-through of its recent dispute with some Indian app developers, which saw some significant de-listings (and subsequent relistings) on the Play Store. It'll be interesting to watch this develop, since the CCI, after its two-year investigation that began in 2020, had found no illegality in Google's service fee policy. Google also complied with the requirement for alternate billing systems. Some Indian app developers contend that if they use an alternate billing system, Google shouldn't charge a service fee.
- Meanwhile, PhonePe tells us their Indus Appstore, an alternative to the Google Play Store for Android, has already crossed a million downloads. That's in a month since its launch. As I've written earlier, the Indus Appstore has seemingly solid foundations that give it a proposition advantage over the Play Store. First, app developers can use any third-party payment gateway for in-app billing and will not be charged a commission if they do so. Second, Indus Appstore will introduce its own in-app billing platform at some point down the line, but for now, they claim it will “remain strictly optional for app developers.” We'll see how that progresses. Third, they're waiving listing fees for app developers for the first year.