Commodities

Future of trading: Making big data work for commodities

Michael Adjemian, an associate professor of agricultural and applied economics at the University of Georgia, says data collection within agriculture is changing both the agricultural industry and the world of trading. “Satellite data on weather and production patterns, as well as data generated by GPS-equipped tractors, drones and soil-sensing applications, are regularly integrated into real-time dashboards to improve the information available to decision makers in the financial economies,” he says. Adjemian adds that this will be crucial given growing worldwide demands for food and the challenge of the climate crisis.

But collecting masses of data creates its own challenges. The volume of data has grown beyond what humans can take in unaided, and making the most of it requires the ability to analyse it effectively. It’s now crucial that different datasets can work together. “The competitive advantage is really evolving from being able to access those additional sources of data, to how well companies can integrate it, commingle it with the data that they already produce themselves, and then apply technology to generate insights,” Refinitiv’s Sanos says.

Preparing for success: getting your data ready for business

Commodities trading has been transformed in the last two decades – but there is even greater change coming. Artificial intelligence, sophisticated analytics tools and vastly improved visualisation methods point to a future that includes increased automation – but this doesn’t necessarily mean replacing human traders and analysts. Automation can include far more effective data processing and a sophisticated application of available information. Essentially, machines can mine data for intelligence; humans can then act upon it.

But the use of data can’t be supercharged if its fundamental components are missing. Companies and traders need to properly ingest data into their systems, make sure it is standardised, use data science to analyse it, and understand the tools needed to visualise this analysis and make it understandable for people on the ground.
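
To make that pipeline concrete, here is a minimal sketch in Python using the open-source pandas library. The feed, column names and mapping are invented for illustration – they are not any particular vendor’s schema or tooling.

```python
import pandas as pd

# Stand-in for a raw supplier feed: field names and units vary by source.
raw = pd.DataFrame({
    "ts":       ["2023-01-02 09:00", "2023-01-02 15:00", "2023-01-03 10:00"],
    "Px":       [81.9, 82.4, 83.1],          # price in USD per barrel
    "Vol(bbl)": [120_000, 95_000, 140_000],  # traded volume in barrels
})

# Ingest and standardise: map supplier-specific fields onto one internal schema.
COLUMN_MAP = {"ts": "timestamp", "Px": "price_usd", "Vol(bbl)": "volume_bbl"}
clean = raw.rename(columns=COLUMN_MAP)
clean["timestamp"] = pd.to_datetime(clean["timestamp"])

# Analyse: a daily summary of the kind a visualisation layer could chart.
daily = (
    clean.set_index("timestamp")
         .resample("1D")
         .agg({"price_usd": "mean", "volume_bbl": "sum"})
         .rename(columns={"price_usd": "avg_price", "volume_bbl": "total_volume"})
)
print(daily)
```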

The task of preparing data for the future isn’t an easy one, Sanos says. It is, however, a challenge that Refinitiv has a solution for. Refinitiv’s Data Management Solution (RDMS) is a platform that allows companies to combine Refinitiv’s multiple sources of commodity data into their own processes. Data can be merged and normalised, and the platform gives clients a way to standardise Refinitiv data alongside third-party data and their own information.

Disparate data united

The data explosion has created new challenges for companies. “You also need somewhere to put it all,” says Refinitiv’s Henson. Companies need to make the most of the data they collect – or access from third parties such as Refinitiv – and to create suitable environments where it can be gathered and aggregated. Increasingly, Henson explains, companies want to access and manipulate data remotely: traders and analysts want to connect to data feeds and then ingest them into even bigger databases.

It’s here where the cloud’s quick startup times and seamless ability to expand as required come into play. By taking advantage of cloud hosting and making data available through Application Programming Interfaces (APIs), traders can easily access huge data streams and use them in the ways they desire. For instance, Refinitiv’s RDMS allows analysts to build views of how much oil is being moved from one location to another. Henson also explains that Refinitiv provides data environments in its terminal product, Eikon, which holds more than 2,000 pricing data sources, with 1,300 providers sending reports; it can also be used in virtual offices to support remote working. This data can then be accessed and shared easily, and subsequently acted upon.
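
That oil-flows example might look something like the following sketch; the shipment records and field names are hypothetical illustrations, not Refinitiv’s actual data or API.

```python
import pandas as pd

# Hypothetical shipment records of the kind a flows dashboard might ingest.
shipments = pd.DataFrame({
    "origin":      ["Houston", "Houston", "Rotterdam", "Singapore"],
    "destination": ["Rotterdam", "Singapore", "Singapore", "Houston"],
    "barrels":     [500_000, 750_000, 300_000, 420_000],
})

# Aggregate the volume moved between each origin/destination pair.
flows = (
    shipments.groupby(["origin", "destination"], as_index=False)["barrels"]
             .sum()
             .sort_values("barrels", ascending=False)
)
print(flows)
```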

Building insights on standardised data

Making one set of data work with another isn’t an easy task. Data must first be normalised: fields in one database need to match those in another if the two are to be successfully combined. If that doesn’t happen, the result may be an inaccurate picture of the world – and it becomes harder for traders to make accurate decisions.
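
To see why mismatched fields matter, consider the minimal sketch below, which aligns two hypothetical price feeds – one quoting per barrel, one per tonne – onto a shared schema before joining them. The column names and the rough 7.33-barrels-per-tonne crude conversion are assumptions made for illustration.

```python
import pandas as pd

# Two hypothetical feeds describing the same market with different schemas.
feed_a = pd.DataFrame({"date": ["2023-01-02"], "price_usd_bbl": [82.1]})
feed_b = pd.DataFrame({"Date": ["2023-01-02"], "price_per_tonne": [601.9]})

# Normalise field names and units so the two can be joined meaningfully.
feed_b = feed_b.rename(columns={"Date": "date"})
feed_b["price_usd_bbl"] = feed_b["price_per_tonne"] / 7.33  # approx. barrels per tonne of crude

merged = feed_a.merge(
    feed_b[["date", "price_usd_bbl"]], on="date", suffixes=("_a", "_b")
)
print(merged)  # prices now directly comparable: price_usd_bbl_a vs price_usd_bbl_b
```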

Sanos says the chief technology officers he speaks to are frustrated by the amount of time their data scientists and staff spend cleaning up datasets to make them usable. “They’re telling me their analysts spend up to 90 per cent of their time just doing this aggregation and normalisation of data,” he explains. Time spent making data compatible is time not spent analysing it and acting on the intelligence it provides. For many companies, the work of amassing data sources and standardising the information is best left to third parties. Refinitiv, for one, has dedicated teams – located in Bangalore, Manila and Poland – that analyse incoming datasets and ensure they are compatible, taking disparate sources of data and making them easy to compare with one another.
