Artificial intelligence Archives - Chgogs News

How the tiny Caribbean island of Anguilla has turned the AI boom into a digital gold mine (Wed, 16 Oct 2024)


The artificial intelligence boom has benefited chatbot makers, computer scientists and Nvidia investors. It’s also providing an unusual windfall for Anguilla, a tiny island in the Caribbean.

ChatGPT’s debut nearly two years ago heralded the dawn of the AI age and kicked off a digital gold rush as companies scrambled to stake their own claims by acquiring websites that end in .ai.

That’s where Anguilla comes in. The British territory was allotted control of the .ai internet address in the 1990s. It was one of hundreds of obscure top-level domains assigned to individual countries and territories based on their names. While the domains are supposed to indicate a website has a link to a particular region or language, it’s not always a requirement.

Google uses google.ai to showcase its artificial intelligence services while Elon Musk uses x.ai as the homepage for his Grok AI chatbot. Startups like AI search engine Perplexity have also snapped up .ai web addresses, redirecting users from the .com version.

Anguilla’s earnings from web domain registration fees quadrupled last year to $32 million, fueled by the surging interest in AI. The income now accounts for about 20% of Anguilla’s total government revenue. Before the AI boom, it hovered at around 5%.

Anguilla’s government, which uses the gov.ai home page, collects a fee every time an .ai web address is renewed. The territory signed a deal Tuesday with a U.S. company to manage the domains amid explosive demand but the fees aren’t expected to change. It also gets paid when new addresses are registered and expired ones are sold off. Some sites have fetched tens of thousands of dollars.

The money directly boosts the economy of Anguilla, which is just 35 square miles (91 square kilometers) and has a population of about 16,000. Blessed with coral reefs, clear waters and palm-fringed white sand beaches, the island is a haven for uber-wealthy tourists. Still, many residents are underprivileged and tourism has been battered by the pandemic and, before that, a powerful hurricane.

Anguilla doesn’t have its own AI industry, though Premier Ellis Webster hopes it will one day become a hub for the technology. He said it was just luck that Anguilla, and not nearby Antigua, was assigned the .ai domain in 1995, because both places have those letters in their names.

Webster said the money takes the pressure off government finances and helps fund key projects, but cautioned that “we can’t rely on it solely.”

“You can’t predict how long this is going to last,” Webster said in an interview with the AP. “And so I don’t want to have our economy and our country and all our programs just based on this. And then all of a sudden there’s a new fad comes up in the next year or two, and then we are left now having to make significant expenditure cuts, removing programs.”

To help keep up with the explosive growth in domain registrations, Anguilla said Tuesday it’s signing a deal with a U.S.-based domain management company, Identity Digital, to help manage the effort. They said the agreement will mean more revenue for the government while improving the resilience and security of the web addresses.

Identity Digital, which also manages Australia’s .au domain, expects to migrate all .ai domain services to its systems by the start of next year, Identity Digital Chief Strategy Officer Ram Mohan said in an interview.

A local software entrepreneur had previously helped Anguilla set up its registry system decades earlier.

There are now more than 533,000 .ai web domains, a more than tenfold increase since 2018. The International Monetary Fund said in a May report that the earnings will help diversify the economy, “thus making it more resilient to external shocks.”

Webster expects domain-related revenue to rise further, and says it could even double this year from last year’s $32 million.

He said the money will finance the airport’s expansion, free medical care for senior citizens and completion of a vocational technology training center at Anguilla’s high school.

The income also provides “budget support” for other projects the government is eyeing, such as a national development fund it could quickly tap for hurricane recovery efforts. The island normally relies on assistance from its administrative power, Britain, which comes with conditions, Webster said.

Mohan said working with Identity Digital will also defend against cyber crooks trying to take advantage of the hype around artificial intelligence.

He cited the example of Tokelau, an island in the Pacific Ocean, whose .tk addresses became notoriously associated with spam and phishing after outsourcing its registry services.

“We worry about bad actors taking something, sticking a .ai to it, and then making it sound like they are much bigger or much better than what they really are,” Mohan said, adding that the company’s technology will quickly take down shady sites.

Another benefit is that .ai websites will no longer need to connect to the government’s digital infrastructure through a single internet cable to the island, which left them vulnerable to digital bottlenecks and physical disruptions.

Now they’ll use the company’s servers distributed globally, which means it will be faster to access them because they’ll be closer to users.

“It goes from milliseconds to microseconds,” Mohan said.



Apple Engineers Show How Flimsy AI ‘Reasoning’ Can Be (Tue, 15 Oct 2024)


For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”

Mix It Up

In “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models”—currently available as a preprint paper—the six Apple researchers start with GSM8K’s standardized set of more than 8,000 grade-school level mathematical word problems, which is often used as a benchmark for modern LLMs’ complex reasoning capabilities. They then take the novel approach of modifying a portion of that testing set to dynamically replace certain names and numbers with new values—so a question about Sophie getting 31 building blocks for her nephew in GSM8K could become a question about Bill getting 19 building blocks for his brother in the new GSM-Symbolic evaluation.

This approach helps avoid any potential “data contamination” that can result from the static GSM8K questions being fed directly into an AI model’s training data. At the same time, these incidental changes don’t alter the actual difficulty of the inherent mathematical reasoning at all, meaning models should theoretically perform just as well when tested on GSM-Symbolic as GSM8K.
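The substitution scheme described above can be sketched as simple templating. This is a minimal illustration of the idea, not code from the paper: the template, names, and number ranges below are invented, but the key property is preserved — the ground-truth answer is recomputed alongside each variant, so only surface details change while the required reasoning stays identical.

```python
import random

# Illustrative GSM8K-style word-problem template; the bracketed fields
# are the parts a GSM-Symbolic-style evaluation varies between runs.
TEMPLATE = ("{name} picks {n1} kiwis on Friday and {n2} kiwis on Saturday. "
            "How many kiwis does {name} have in total?")

NAMES = ["Sophie", "Bill", "Ava", "Omar"]  # hypothetical name pool

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Instantiate the template with fresh names and numbers.

    Returns the question text and its ground-truth answer. Because the
    answer is recomputed from the sampled numbers, every variant is
    exactly as hard as the original -- only surface details differ.
    """
    name = rng.choice(NAMES)
    n1, n2 = rng.randint(10, 40), rng.randint(10, 40)
    question = TEMPLATE.format(name=name, n1=n1, n2=n2)
    return question, n1 + n2

rng = random.Random(0)
q, answer = make_variant(rng)
```

A fresh seed per evaluation run yields a fresh test set that cannot have been memorized verbatim from training data.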

Instead, when the researchers tested more than 20 state-of-the-art LLMs on GSM-Symbolic, they found average accuracy reduced across the board compared to GSM8K, with performance drops between 0.3 percent and 9.2 percent, depending on the model. The results also showed high variance across 50 separate runs of GSM-Symbolic with different names and values. Gaps of up to 15 percent accuracy between the best and worst runs were common within a single model and, for some reason, changing the numbers tended to result in worse accuracy than changing the names.

This kind of variance—both within different GSM-Symbolic runs and compared to GSM8K results—is more than a little surprising since, as the researchers point out, “the overall reasoning steps needed to solve a question remain the same.” The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any “formal” reasoning but are instead “attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data.”
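The run-to-run spread discussed above is easy to make concrete: score each templated run separately, then compare the best and worst runs. The per-question correctness rates below are simulated for illustration; the paper's actual evaluations use 50 runs per model.

```python
import random

# Simulate per-question correctness for one model across several
# GSM-Symbolic-style runs (each run would use different names/numbers).
rng = random.Random(42)
runs = [[rng.random() < 0.90 for _ in range(100)] for _ in range(8)]

# Accuracy of each run, and the best-vs-worst gap across runs.
accuracies = [sum(run) / len(run) for run in runs]
gap = max(accuracies) - min(accuracies)
```

A model doing stable, formal reasoning should show a gap near zero here; the paper reports gaps of up to 15 percentage points within a single model.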

Don’t Get Distracted

Still, the overall variance shown for the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI’s ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That’s a pretty high success rate using either benchmark, regardless of whether or not the model itself is using “formal” reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).

The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding “seemingly relevant but ultimately inconsequential statements” to the questions. For this “GSM-NoOp” benchmark set (short for “no operation”), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that “five of them [the kiwis] were a bit smaller than average.”

Adding in these red herrings led to what the researchers termed “catastrophic performance drops” in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits in using simple “pattern matching” to “convert statements to operations without truly understanding their meaning,” the researchers write.
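The GSM-NoOp perturbation amounts to inserting an arithmetically irrelevant clause into an otherwise unchanged question. The sketch below is illustrative, modeled on the kiwi example above rather than drawn from the benchmark itself:

```python
# A "no-op" clause: surface detail with no arithmetic consequence. A
# solver that truly understands the problem should ignore it entirely.
NOOP = "Five of the kiwis were a bit smaller than average. "

def add_red_herring(question: str) -> str:
    # Insert the distractor just before the final "How many ..." ask.
    head, sep, tail = question.rpartition("How many")
    return head + NOOP + sep + tail

base = ("Oliver picks 44 kiwis on Friday and 24 kiwis on Saturday. "
        "How many kiwis does Oliver have in total?")
distracted = add_red_herring(base)
# Both questions have the same answer (68); the brittle failure mode the
# paper reports is models subtracting the five "smaller" kiwis anyway.
```

Pattern-matching a superficially similar training example (where such a clause did matter) is exactly the behavior this probe is designed to expose.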



Anyone Can Turn You Into an AI Chatbot. There’s Little You Can Do to Stop Them (Tue, 15 Oct 2024)


Matthew Sag, a distinguished professor at Emory University who researches copyright and artificial intelligence, concurs. Even if a user creates a bot intentionally designed to cause emotional distress, the tech platform likely can’t be sued for that.

He points out that Section 230 of the 1996 Communications Decency Act has long protected platforms at the federal level from being liable for certain harms to their users, even though various rights to publicity laws and privacy laws exist at the state level.

“I’m not an anti-tech person by any means, but I really think Section 230 is just massively overbroad,” Sag says. “It’s well past time we replaced it with some kind of notice and takedown regime, a simple expedient system to say, ‘This is infringing on my rights to publicity,’ or ‘I have a good faith belief that there’s been an infliction of emotional distress,’ and then the companies would either have to take it down or lose their liability shield.”

Character.AI, and other AI services like it, have also protected themselves by emphasizing that they serve up “artificial” conversations. “Remember, everything characters say is made up!” Character.AI warns at the bottom of its chats. Similarly, when Meta created chatbot versions of celebs in its messaging apps, the company headlined every conversation with a disclaimer. A chat with Snoop, for example, would lead with “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”

But while Meta’s system for messaging with celebrity chatbots is tightly controlled, Character.AI’s is a more open platform, with options for anyone to create and customize their own chatbot.

Character.AI has also positioned its service as, essentially, personal. (Character.AI’s Instagram bio includes the tagline, “AI that feels alive.”) And while most users may be savvy enough to distinguish between a real-person conversation and one with an AI impersonator, others may develop attachments to these characters—especially if they’re facsimiles of a real person they feel they already know.

In a conversation between the real-life Sarkeesian and a bot made of her without her knowledge or consent, the Character.AI bot told her that “every person is entitled to privacy.”

“Privacy is important for maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot said in screenshots viewed by WIRED.

Sarkeesian pushed the bot on this point. “Your intentions does not mean that harm hasn’t happened or that you did not cause harm,” she wrote.

Character.AI’s bot agreed. “Even if my intentions were not malicious, there is still potential for harm,” it replied. “This is a complex issue with many factors to consider, including ethical concerns about using someone’s work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian, without considering ethical implications, and that’s something that my creators should have thought through more thoroughly.”





Artificial Intelligence taken to next level by Google as it taps nuclear power to fuel… (Tue, 15 Oct 2024)



Google partners with Kairos Power to use small nuclear reactors to power AI operations, aiming for sustainable energy solutions by 2030.

Google to use small nuclear reactors for powering AI

Google has partnered with Kairos Power to explore using small nuclear reactors to power its artificial intelligence (AI) operations, marking a major development in how the tech industry is addressing the increasing energy demands of AI. This agreement represents a shift towards sustainable energy solutions as AI technologies continue to grow rapidly.

The partnership aims to have Kairos Power’s first small modular reactor (SMR) operational by 2030, with plans to bring additional reactors online through 2035. Together, these reactors are expected to produce up to 500 megawatts of power, offering a stable and carbon-free energy source for Google’s data centers. This move is part of Google’s broader efforts to support its clean energy goals and maintain the growth of its AI operations.

Google highlighted the benefits of advanced nuclear technology in a statement, noting that the new generation of reactors provides a way to speed up nuclear energy deployment thanks to their simplified design and improved safety features. A senior director of energy and climate at Google further emphasised the importance of nuclear power in enabling clean growth, stating, “The grid needs these kinds of clean, reliable sources of energy that can support the build-out of these technologies.”

This collaboration is part of a broader trend in the tech industry. Other major companies, such as Microsoft and Amazon, are also exploring nuclear energy solutions. Microsoft recently announced plans to use power from the Three Mile Island nuclear facility, while Amazon has invested in a nuclear-powered data center campus. These initiatives reflect a growing interest among tech giants in finding reliable and sustainable energy sources to keep up with the rising energy consumption associated with AI.

Kairos Power’s SMR technology uses a molten-salt cooling system, which is expected to enhance safety and efficiency compared to traditional reactors. However, despite the potential benefits, the technology is still in its early stages and must gain regulatory approval before it can be widely adopted.

While nuclear energy offers a steady power supply that is less variable than solar or wind, it remains a topic of debate due to concerns over waste management, accident risks, and high costs. Nonetheless, with AI’s energy requirements continuing to rise, nuclear power is emerging as a promising solution for meeting the energy needs of the future.


 





AI-led firms report higher growth, outpace peers in revenue, productivity (Tue, 15 Oct 2024)




Companies embracing AI-led processes are experiencing remarkable growth globally, outpacing their peers in revenue growth and productivity, according to a report by Accenture.


In India, the percentage of organisations fully modernised with AI has surged from 8 per cent in 2023 to an impressive 25 per cent in 2024, marking a significant leap in operational efficiency and revenue generation, the report said.



The findings are based on a survey of 2,000 executives across 12 countries and 15 industries, including insights from 200 senior executives based in India.


The report, titled “Reinventing Enterprise Operations with Gen AI,” highlighted that globally, organisations that have adopted intelligent operations are achieving 2.5 times higher revenue growth and 2.4 times greater productivity than their peers.


This trend underscores the transformative power of Generative AI, which has become a catalyst for innovation across various sectors. Notably, 79 per cent of Indian companies reported that their investments in generative AI and automation have met or exceeded expectations, prompting 64 per cent to plan further enhancements by 2026.


Despite these advancements, the survey indicates that many organizations still face challenges in fully harnessing the potential of AI.


Approximately 64 per cent of companies worldwide struggle with operational readiness, primarily due to inadequate data foundations and a lack of talent reinvention strategies. In India, 58 per cent of executives expressed concerns about their workforce’s preparedness for the rapid advancements in AI technology.


Accenture Group’s Chief Executive for Operations Arundhati Chakraborty emphasised the urgency for businesses to adapt, saying that Generative AI is more than just technology; it requires a mindset change that affects the entire enterprise.


Most executives understand the urgency of reinventing with generative AI, but in many cases their enterprise operations are not ready to support large-scale transformation.


“…an end-to-end perspective leveraging talent, leading practices and effective collaboration between business and technology teams is essential for intelligent operations,” Chakraborty said.


Outlining four key strategies, the report said centralised data governance, a talent-first strategy, collaborative innovation, and cloud-based process mining are essential for business leaders aiming to enhance operational maturity.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Oct 15 2024 | 8:43 PM IST



NYT sends AI startup Perplexity ‘cease and desist’ notice over content use: Report (Tue, 15 Oct 2024)



The New York Times has sent generative AI startup Perplexity a “cease and desist” notice demanding the company stop using its content, the Wall Street Journal reported on Tuesday.

The letter from the news publisher said the way Perplexity was using its content, including to create summaries and other types of output, violates its rights under copyright law, the report said.

Since the introduction of ChatGPT, publishers have been raising the alarm about chatbots that can comb the internet for information and create paragraph-length summaries for the user.

Perplexity and the New York Times did not immediately respond to Reuters’ requests for comment.

NYT is also tussling with OpenAI, which it had sued late last year, accusing the firm of using millions of its newspaper articles without permission to train its AI chatbot.

Other media firms such as The Atlantic and Vox Media have signed content licensing deals with OpenAI which give the ChatGPT-maker access to their content.

In the letter to Perplexity, NYT asked the company to provide information on how it is accessing the publisher’s website despite its prevention efforts, according to the WSJ report.

Perplexity had previously assured the publisher it would stop using “crawling” technology, the report said citing the letter.

Earlier this year, Reuters reported multiple AI companies were bypassing a web standard used by publishers to block the scraping of their data used in generative AI systems.

Perplexity faced accusations from media organizations such as Forbes and Wired for plagiarizing their content, but has since launched a revenue-sharing program to address some concerns put forward by publishers.



‘Growing AI use raises cyberattack risks, could threaten financial stability’ | Mint (Mon, 14 Oct 2024)


Mumbai: Reserve Bank of India (RBI) governor Shaktikanta Das on Monday warned that a growing reliance on artificial intelligence raises the risks of cyberattacks and data breaches, potentially threatening the country’s financial stability.

While technological advancements such as AI and machine learning (ML) have opened up new avenues of business and profit expansion for financial institutions, these technologies also pose financial stability risks.

“The heavy reliance on AI can lead to concentration risks, especially when a small number of tech providers dominate the market. This could amplify systemic risks, as failures or disruptions in these systems may cascade across the entire financial sector,” Das said at the RBI@90 High-Level Conference organised by the central bank in New Delhi. His speech was later released on the central bank’s website.

Further, AI’s opacity makes it difficult to audit or interpret algorithms that drive decisions, which could potentially lead to unpredictable consequences in the markets, Das said, adding that banks and financial institutions must put in place adequate risk mitigation measures against all these risks.

“In the ultimate analysis, banks have to ride on the advantages of AI and Bigtech and not allow the latter to ride on them,” according to the speech posted on the RBI website.

Das also touched upon the challenges from growing digitalisation of financial services, citing the example of liquidity stress caused by rumours and misinformation that can “spread very quickly”, given deep social media presence and vast access to online banking and instant money transfers. “Banks have to remain alert in the social media space and also strengthen their liquidity buffers,” he said.

RBI, in a recent draft circular, had proposed 5% additional liquidity buffers for digitally-linked bank deposits to mitigate risks from quick withdrawals through internet and mobile banking. While banks have sought some relaxation on this, the final norms are still awaited.

Financial stability risks

Das said global central banks today face multiple emerging risks to financial stability. The foremost is that the divergence in global monetary policies—from monetary easing in some economies to tightening in a few and a pause in several other economies—could lead to volatility in capital flows and exchange rates, which may disrupt financial stability.

“We saw a glimpse of this with the sharp appreciation of the Japanese Yen in early August which led to disruptive reversals in the Yen carry trade and rattled financial markets across the globe,” he said.

Second, the rapid expansion in private credit markets with limited regulation poses significant risks to financial stability, particularly since they have not been stress-tested in a downturn.

His comments come after the central bank’s recent warnings around unscrupulous growth in certain segments of unsecured and secured loans such as gold, mortgage and microfinance by non-bank lenders, especially those backed by private equity or venture capital players.

Das also highlighted risks emanating from higher interest rates, aimed at curtailing inflationary pressures, such as increase in debt servicing costs, financial market volatility, and risks to asset quality.

“Stretched asset valuations in some jurisdictions could trigger contagion across financial markets, creating further instability. The correction in commercial real estate (CRE) prices in some jurisdictions can put small and medium-sized banks under stress, given their large exposures to this sector,” he said, adding that the interconnectedness between CRE, non-bank financial institutions, and the broader banking system amplifies these risks.

Another challenge facing central banks today is soaring public debt, which is becoming a binding constraint on monetary policy in several countries. Global public debt has surged since the pandemic to 93.2% of GDP in 2023 and is likely to reach 100% of GDP by 2029.

In major economies, debt-GDP ratios are on an upward trajectory, raising concerns about their sustainability and their negative spillovers for the broader global economy. In several other countries, central banks are willy-nilly expected to facilitate financing of such huge public debts, the impact of which is felt by emerging and developing economies and even other advanced economies.

“These spillovers can be expected to accentuate as capital flows dwarf trade flows. Quite naturally, emerging economies are having to strengthen their policy frameworks and buffers to manage this external flux and mitigate its adverse consequences,” he said.

Cross-border payments

The cross-border payments market has grown significantly on the back of a surge in the volume of cross-border worker remittances, gross flows of capital, and cross-border e-commerce. Thus, there is tremendous scope to significantly reduce the cost and time for remittances—considered the starting point for many emerging and developing economies for cross border peer-to-peer (P2P) payments, Das said.

“India is one of the few large economies with a 24×7 real time gross settlement (RTGS) system. The feasibility of expanding RTGS to settle transactions in major trade currencies such as USD, EUR and GBP can be explored through bilateral or multilateral arrangements,” Das said, adding that India and a few other economies have already commenced efforts to expand linkage of cross-border fast payment systems both in the bilateral and multilateral modes.

These include Project Nexus, a multilateral initiative to enable instant cross-border retail payments by linking the domestic Instant Payment Systems (IPSs) of India and four Asean (Association of Southeast Asian Nations) countries (Malaysia, the Philippines, Singapore, and Thailand). Under bilateral arrangements, cross-border payment linkages have already been established by India with Singapore, the UAE, Mauritius, Sri Lanka and Nepal, among others.

The value of global cross-border payments is estimated to surpass $250 trillion by 2027. The global cross-border B2C e-commerce market was valued at $889 billion in 2022 and is estimated to grow more than sixfold to $5.6 trillion in revenue by 2030, as per reports cited by the RBI.
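Those market-size figures imply a steep growth path. As a quick back-of-the-envelope check (using only the numbers quoted above, not any additional RBI data):

```python
# Back-of-the-envelope check of the cross-border B2C e-commerce figures:
# $889 billion in 2022, estimated to reach $5.6 trillion by 2030.
v_2022 = 889e9   # USD, 2022 market value
v_2030 = 5.6e12  # USD, 2030 estimate
years = 2030 - 2022

ratio = v_2030 / v_2022          # growth multiple over the period
cagr = ratio ** (1 / years) - 1  # implied compound annual growth rate

print(f"growth multiple: {ratio:.1f}x")  # 6.3x, i.e. "more than six times"
print(f"implied CAGR: {cagr:.1%}")       # ~25.9% per year
```

In other words, the estimate assumes the market compounds at roughly a quarter per year for eight years.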

Here, central bank digital currencies (CBDCs) could play an important role, Das said, adding that India, under its wholesale and retail CBDC pilots, is experimenting with value-added services such as programmability, interoperability with UPI-retail fast payment system and development of offline solutions for remote areas and underserved segments of the population.

Going ahead, harmonization of standards and interoperability would be important for CBDCs for cross-border payments and to overcome financial stability concerns associated with cryptocurrencies. While a key challenge could be that countries may prefer to design their own systems, this could be overcome by developing a “plug-and-play system” that allows replicability of India’s experience while also maintaining the sovereignty of respective countries, Das said.





The post ‘Growing AI use raises cyberattack risks, could threaten financial stability’ | Mint appeared first on Chgogs News.

India cenbank chief warns against financial stability risks from growing use of AI https://chgogs.org/india-cenbank-chief-warns-against-financial-stability-risks-from-growing-use-of-ai/ Mon, 14 Oct 2024 11:59:09 +0000 The RBI Governor said that the use of artificial intelligence and machine learning in financial services...

The post India cenbank chief warns against financial stability risks from growing use of AI appeared first on Chgogs News.


The RBI Governor said that the use of artificial intelligence and machine learning in financial services globally can lead to financial stability risks and warrants adequate risk mitigation practices. | Photo Credit: Reuters

The growing use of artificial intelligence and machine learning in financial services globally can lead to financial stability risks and warrants adequate risk mitigation practices by banks, the Governor of the Reserve Bank of India said on Monday (October 14, 2024).

“The heavy reliance on AI can lead to concentration risks, especially when a small number of technology providers dominate the market,” Shaktikanta Das said at an event in New Delhi.

This could amplify systemic risks as failures or disruptions in these systems may cascade across the financial sector, Mr. Das added.

Indian banks are using AI to enhance customer experience, reduce costs, manage risks and drive growth through chatbots and personalised banking.

The growing use of AI introduces new vulnerabilities like increased susceptibility to cyber attacks and data breaches, Mr. Das said.

AI’s “opacity” makes it difficult to audit and interpret the algorithms that drive lenders’ decisions, which could potentially lead to “unpredictable consequences in the market,” he warned.

Separately, Mr. Das said private credit markets have expanded rapidly across the globe with limited regulation, posing significant risks to financial stability, particularly since these markets have not been stress-tested in a downturn.



How to Stop Your Data From Being Used to Train AI https://chgogs.org/how-to-stop-your-data-from-being-used-to-train-ai/ Sat, 12 Oct 2024 13:30:00 +0000 If you’re using a personal Adobe account, it’s easy to opt out of the content...

The post How to Stop Your Data From Being Used to Train AI appeared first on Chgogs News.


Adobe

If you’re using a personal Adobe account, it’s easy to opt out of the content analysis. Open Adobe’s privacy page, scroll down to the Content analysis for product improvement section, and click the toggle off. If you have a business or school account, you are automatically opted out.

Amazon: AWS

AI services from Amazon Web Services, like Amazon Rekognition or Amazon CodeWhisperer, may use customer data to improve the company’s tools, but it’s possible to opt out of the AI training. This used to be one of the most complicated processes on the list, but it has been streamlined in recent months. Amazon’s support page outlines the full process for opting your organization out.

Figma

Figma, a popular design software, may use your data for model training. If your account is licensed through an Organization or Enterprise plan, you are automatically opted out. Starter and Professional accounts, on the other hand, are opted in by default. The setting can be changed at the team level: open the team settings, go to the AI tab, and switch off Content training.

Google Gemini

For users of Google’s chatbot, Gemini, conversations may sometimes be selected for human review to improve the AI model. Opting out is simple, though. Open Gemini in your browser, click on Activity, and select the Turn Off drop-down menu. Here you can turn off Gemini Apps Activity alone, or opt out and delete your conversation data as well. While this means that, in most cases, future chats won’t be selected for human review, data that has already been selected is not erased by this process. According to Google’s privacy hub for Gemini, those chats may stick around for three years.

Grammarly

Grammarly updated its policies, so personal accounts can now opt out of AI training. Do this by going to Account, then Settings, and turning the Product Improvement and Training toggle off. Is your account through an enterprise or education license? Then, you are automatically opted out.

Grok AI (X)

Kate O’Flaherty wrote a great piece for WIRED about Grok AI and protecting your privacy on X, the platform where the chatbot operates. It’s another situation where millions of users of a website woke up one day and were automatically opted in to AI training with minimal notice. If you still have an X account, it’s possible to opt out of your data being used to train Grok by going to the Settings and privacy section, then Privacy and safety. Open the Grok tab, then deselect your data sharing option.

HubSpot

HubSpot, a popular marketing and sales software platform, automatically uses data from customers to improve its machine-learning model. Unfortunately, there’s not a button to press to turn off the use of data for AI training. You have to send an email to privacy@hubspot.com with a message requesting that the data associated with your account be opted out.
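Because HubSpot’s opt-out runs through email rather than a settings toggle, the request can be drafted programmatically. A minimal sketch using Python’s standard library (the subject line, body wording, sender address, and helper name are illustrative, not anything HubSpot prescribes; only the privacy@hubspot.com address comes from the article):

```python
# Sketch: compose the opt-out request email described above.
# Only privacy@hubspot.com is from the source; everything else is a placeholder.
from email.message import EmailMessage

def build_optout_email(account_email: str) -> EmailMessage:
    """Build (but do not send) an AI-training opt-out request."""
    msg = EmailMessage()
    msg["From"] = account_email
    msg["To"] = "privacy@hubspot.com"
    msg["Subject"] = "Request to opt out of AI/ML training"
    msg.set_content(
        f"Hello,\n\nPlease opt the data associated with the account "
        f"{account_email} out of use for machine-learning model training.\n\n"
        "Thank you."
    )
    return msg

msg = build_optout_email("owner@example.com")
print(msg["To"])  # privacy@hubspot.com
```

Sending it still requires your own SMTP credentials (for example, via `smtplib.SMTP(...).send_message(msg)`); the sketch only builds the message.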

LinkedIn

Users of the career networking website were surprised to learn in September that their data was potentially being used to train AI models. “At the end of the day, people want that edge in their careers, and what our gen-AI services do is help give them that assist,” says Eleanor Crum, a spokesperson for LinkedIn.

You can opt out from new LinkedIn posts being used for AI training by visiting your profile and opening the Settings. Tap on Data Privacy and uncheck the slider labeled Use my data for training content creation AI models.

OpenAI: ChatGPT and Dall-E


People reveal all sorts of personal information while using a chatbot. OpenAI provides some options for what happens to what you say to ChatGPT—including allowing its future AI models not to be trained on the content. “We give users a number of easily accessible ways to control their data, including self-service tools to access, export, and delete personal information through ChatGPT. That includes easily accessible options to opt out from the use of their content to train models,” says Taya Christianson, an OpenAI spokesperson. (The options vary slightly depending on your account type, and data from enterprise customers is not used to train models).



Amazon’s AI for delivery, Microsoft’s healthcare agents, and Writer’s model: This week in new AI launches https://chgogs.org/amazons-ai-for-delivery-microsofts-healthcare-agents-and-writers-model-this-week-in-new-ai-launches/ Sat, 12 Oct 2024 09:00:00 +0000 EvenUp co-founders (L-R) Raymond Mieszaniec, Rami Karabibar, and Saam Mashhad. Photo: EvenUp. EvenUp, an AI startup...

The post Amazon’s AI for delivery, Microsoft’s healthcare agents, and Writer’s model: This week in new AI launches appeared first on Chgogs News.



EvenUp co-founders (L-R) Raymond Mieszaniec, Rami Karabibar, and Saam Mashhad
Photo: EvenUp

EvenUp, an AI startup focused on document generation for personal injury cases, announced this week that it raised a $135 million Series D funding round at a valuation of over $1 billion, per a press release. The round was led by Bain Capital Ventures and brings EvenUp’s total funding to $235 million.

The startup’s Claims Intelligence Platform is powered by its AI model called Piai. The model was “trained on hundreds of thousands of injury cases, millions of medical records and visits, and internal legal expertise,” according to the startup.

“At EvenUp, we’re committed to revolutionizing the personal injury sector in the U.S,” Rami Karabibar, EvenUp co-founder and chief executive, told Quartz. “With our Series D, we’re dedicated to driving further innovation by bringing new products and features to market to strengthen our leadership position in legal-focused generative AI.”

EvenUp is “fully dedicated to supporting our customers by freeing up their time in routine tasks, allowing them to focus more on what truly matters—their clients,” Karabibar said.

The company says over 1,000 law firms have used its platform to claim over $1.5 billion in damages.


