Over the past year, veteran software engineer Jay Prakash Thakur has spent nights and weekends prototyping AI agents that could, in the near future, order meals and engineer mobile apps almost entirely on their own. His agents, while surprisingly capable, have raised new legal questions that await companies trying to capitalize on Silicon Valley’s hottest new technology.
Agents are AI programs that can act on their own, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills on request, Microsoft and other tech giants expect agents to take on more complex jobs, and most importantly, to do so with little human oversight.
The tech industry’s most ambitious plans involve multi-agent systems, in which dozens of agents would one day cooperate to replace entire workforces. For companies, the appeal is clear: savings on time and labor costs. Demand for the technology is already rising. Tech market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service queries by 2029. Fiverr, a service where businesses can book freelance coders, reports that searches for “ai agent” have surged 18,347 percent in recent months.
Thakur, a mostly self-taught programmer based in California, wanted to be at the forefront of the emerging field. His day job at Microsoft is unrelated to agents, but he has been tinkering with AutoGen, Microsoft’s open source software for building agents, since working at Amazon back in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a dash of code. Last week, Amazon rolled out a similar agent development tool called Strands; Google offers what it calls an Agent Development Kit.
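To give a sense of how little code that can mean: a minimal two-agent loop in the open source pyautogen library might look like the sketch below. The model, task, and settings are illustrative assumptions, not Thakur’s actual project.

    # A minimal AutoGen sketch: one assistant agent and one proxy agent that
    # can run the code it writes. Model and task are illustrative assumptions.
    import os
    from autogen import AssistantAgent, UserProxyAgent

    llm_config = {
        "config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]
    }

    assistant = AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",  # the "little human oversight" the field is chasing
        code_execution_config={"work_dir": "scratch", "use_docker": False},
    )

    # One call kicks off an autonomous back-and-forth until the task is done.
    user_proxy.initiate_chat(
        assistant, message="Write a script that orders a meal from a sample menu."
    )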
Since agents are meant to act independently, the question of who bears responsibility when their mistakes cause financial harm looms large. Assigning blame when agents from different companies err within a single system could become contentious, Thakur believes. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation from different people’s notes. “It’s often impossible to pinpoint responsibility,” Thakur said.
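The analogy is concrete for anyone who has had to merge such logs. In the invented example below, interleaving two vendors’ agent logs by timestamp yields a timeline, but nothing in either log records which agent introduced the error:

    # Hypothetical logs from two vendors' agents; formats and entries invented.
    search_log = [
        ("12:00:01", "search-agent", "found tool; forwarded policy text downstream"),
    ]
    summary_log = [
        ("12:00:03", "summary-agent", "condensed policy to one line"),
    ]

    # Sorting gives an order of events, like collating different people's notes,
    # but neither entry says where the meaning was lost along the way.
    for timestamp, agent, event in sorted(search_log + summary_log):
        print(f"{timestamp} [{agent}] {event}")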
Joseph Fireman, senior legal counsel at OpenAI, said onstage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will need to be prepared to take some responsibility when agents cause harm, even when a kid tinkering with an agent may be the one at fault. (If that person is to blame, they likely aren’t worth suing, the thinking goes.) “I don’t think anybody is hoping to get through to the consumer sitting in their mom’s basement on the computer,” Fireman said. Insurers have begun offering coverage for AI chatbot issues to help companies cover the costs of accidents.
Onion rings
Thakur’s experiments have involved stringing together agents in systems that require as little human intervention as possible. One project he pursued aimed to replace software developers with two agents. One was trained to search for the specialized tools needed to build apps; the other summarized those tools’ usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says.
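In outline, such a pipeline could be wired up as below; the agent roles come from Thakur’s description, while every function name and stubbed output is invented for illustration.

    # Sketch of the described pipeline: search agent -> policy summarizer ->
    # (future) coding agent. All names and stub values are hypothetical.

    def search_agent(task: str) -> dict:
        """Find a specialized tool suited to the task (stubbed result)."""
        return {"name": "ExampleBuildAPI",
                "policy_page": "Free tier: 100 requests per minute."}

    def policy_agent(policy_page: str) -> str:
        """Summarize a tool's usage policy; a real agent would condense it with an LLM."""
        return policy_page.lower()

    def coding_agent(tool: dict, policy: str) -> str:
        """The envisioned third agent: plan an app that honors the stated policy."""
        return f"Build with {tool['name']}, throttled to respect: {policy}"

    tool = search_agent("ship a mobile app")
    print(coding_agent(tool, policy_agent(tool["policy_page"])))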
When Thakur put his prototype to the test, the search agent found a tool that, according to its website, “supports unlimited requests per minute for enterprise users” (meaning high-paying customers can lean on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualification “per minute for enterprise users.” It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program making unlimited requests to the outside service. Because this was a test, there was no harm. Had it happened in real life, the truncated guidance could have caused the entire system to break down unexpectedly.
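The failure is easy to recreate in miniature. In this hypothetical reconstruction, the only difference between a throttled client and a runaway one is whether the summary kept the qualifier:

    # Invented recreation of the bug: request pacing is derived from the policy
    # text, as a naive coding agent might do.
    FULL_POLICY = "supports unlimited requests per minute for enterprise users"
    TRUNCATED = "supports unlimited requests"  # the qualifier the summarizer dropped

    def seconds_between_calls(policy_summary: str, is_enterprise: bool) -> float:
        unlimited = "unlimited requests" in policy_summary
        enterprise_only = "enterprise" in policy_summary
        if unlimited and (is_enterprise or not enterprise_only):
            return 0.0            # fire as fast as possible
        return 60 / 100           # assumed fallback cap of 100 requests per minute

    print(seconds_between_calls(FULL_POLICY, is_enterprise=False))  # 0.6 -> throttled
    print(seconds_between_calls(TRUNCATED, is_enterprise=False))    # 0.0 -> runaway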