Developing advanced machine learning systems at Trumid with the Deep Graph Library for Knowledge Embedding

海外精选
海外精选的内容汇集了全球优质的亚马逊云科技相关技术内容。同时,内容中提到的“AWS” 是 “Amazon Web Services” 的缩写,在此网站不作为商标展示。
0
0
{"value":"This is a guest post co-written with Mutisya Ndunda from Trumid.\n\nLike many industries, the corporate bond market doesn’t lend itself to a one-size-fits-all approach. It’s vast, liquidity is fragmented, and institutional clients demand solutions tailored to their specific needs. Advances in AI and machine learning (ML) can be employed to improve the customer experience, increase the efficiency and accuracy of operational workflows, and enhance performance by supporting multiple aspects of the trading process.\n\n[Trumid](https://www.trumid.com/) is a financial technology company building tomorrow’s credit trading network—a marketplace for efficient trading, information dissemination, and execution between corporate bond market participants. Trumid is optimizing the credit trading experience by combining leading-edge product design and technology principles with deep market expertise. The result is an integrated trading solution delivering a full ecosystem of protocols and execution tools within one intuitive platform.\n\nThe bond trading market has traditionally involved offline buyer/seller matching processes aided by rules-based technology. Trumid has embarked on an initiative to transform this experience. Through its electronic trading platform, traders can access thousands of bonds to buy or sell, a community of engaged users to interact with, and a variety of trading protocols and execution solutions. With an expanding network of users, Trumid’s AI and Data Strategy team partnered with the [AWS Machine Learning Solutions Lab](https://aws.amazon.com/ml-solutions-lab/). The objective was to develop ML systems that could deliver a more personalized trading experience by modeling the interest and preferences of users for bonds available on Trumid.\n\nThese ML models can be used to speed up time to insight and action by personalizing how information is displayed to each user to ensure that the most relevant and actionable information a trader may care about is prioritized and accessible.\n\nTo solve this challenge, Trumid and the ML Solutions Lab developed an end-to-end data preparation, model training, and inference process based on a deep neural network model built using the Deep Graph Library for Knowledge Embedding ([DGL-KE](https://github.com/awslabs/dgl-ke)). An end-to-end solution with [Amazon SageMaker](https://aws.amazon.com/sagemaker/) was also deployed.\n\n### **Benefits of graph machine learning**\n\nReal-world data is complex and interconnected, and often contains network structures. Examples include molecules in nature, social networks, the internet, roadways, and financial trading platforms.\n\nGraphs provide a natural way to model this complexity by extracting important and rich information that is embedded in the relations between entities.\n\nTraditional ML algorithms require data to be organized as tables or sequences. This generally works well, but some domains are more naturally and effectively represented by graphs (such as a network of objects related to each other, as illustrated later in this post). Instead of coercing these graph datasets into tables or sequences, you can use graph ML algorithms to both represent and learn from the data as presented in its graph form, including information about constituent nodes, edges, and other features.\n\nConsidering that bond trading is inherently represented as a network of interactions between buyers and sellers involving various types of bond instruments, an effective solution needs to harness the network effects of the communities of traders that participate in the market. Let’s look at how we leveraged the trading network effects and implemented this vision here.\n\n### **Solution**\n\nBond trading is characterized by several factors, including trade size, term, issuer, rate, coupon values, bid/ask offer, and type of trading protocol involved. In addition to orders and trades, Trumid also captures “indications of interest” (IOIs). The historical interaction data embodies the trading behavior and the market conditions evolving over time. We used this data to build a graph of timestamped interactions between traders, bonds, and issuers, and used graph ML to predict future interactions.\n\nThe recommendation solution comprised four main steps:\n\n- Preparing the trading data as a graph dataset\n- Training a knowledge graph embedding model\n- Predicting new trades\n- Packaging the solution as a scalable workflow\n\nIn the following sections, we discuss each step in more detail.\n\n### **Preparing the trading data as a graph dataset**\n\nThere are many ways to represent trading data as a graph. One option is to represent the data exhaustively with nodes, edges, and properties: traders as nodes with properties (such as employer or tenure), bonds as nodes with properties (issuer, amount outstanding, maturity, rate, coupon value), and trades as edges with properties (date, type, size). Another option is to simplify the data and use only nodes and relations (relations are typed edges like traded or issued-by). This latter approach worked better in our case, and we used the graph represented in the following figure.\n\n![image.png](https://dev-media.amazoncloud.cn/1220324cb4f341cc96c69b4bd67cd991_image.png)\n\nGraph of relations between traders, bonds and bond issuers\n\nAdditionally, we removed some of the edges considered obsolete: if a trader interacted with more than 100 different bonds, we kept only the last 100 bonds.\n\nFinally, we saved the graph dataset as a list of edges in [TSV](https://en.wikipedia.org/wiki/Tab-separated_values) format:\n\n```\nt987\ttrade-old\t\ti55198\nt995\ttrade-old\t\ti55306\nt987\ttrade-recent\ti24528\nt995\ttrade-recent\ti49181\nt987\tioi-recent\t\ti24523\nt995\tioi-old \t\ti49178\n…\ni49611\tissued-by\t\tXXX\ni46569\tissued-by\t\tYYY\ni46507\tissued-by\t\tZZZ\n```\n\n### **Training a knowledge graph embedding model**\n\nFor graphs composed only of nodes and relations (often called knowledge graphs), the DGL team developed the knowledge graph embedding framework [DGL-KE](https://github.com/awslabs/dgl-ke). KE stands for knowledge embedding, the idea being to represent nodes and relations (knowledge) by coordinates (embeddings) and optimize (train) the coordinates so that the original graph structure can be recovered from the coordinates. In the list of available embedding models, we selected TransE (translational embeddings). TransE trains embeddings with the objective of approximating the following equality:\n\nSource node embedding + relation embedding = target node embedding (1)\n\nWe trained the model by invoking the ```dglke_train``` command. The output of the training is a model folder containing the trained embeddings.\n\nFor more details about TransE, refer to [Translating Embeddings for Modeling Multi-relational Data.](https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf)\n\n### **Predicting new trades**\n\nTo predict new trades from a trader with our model, we used the equality (1): add the trader embedding to the trade-recent embedding and looked for bonds closest to the resulting embedding.\n\nWe did this in two steps:\n\n1. Compute scores for all possible trade-recent relations with ```dglke_predict```.\n2. Compute the top 100 highest scores for each trader.\n\nFor detailed instructions on how to use the DGL-KE, refer to [Training knowledge graph embeddings at scale with the Deep Graph Library](https://aws.amazon.com/blogs/machine-learning/training-knowledge-graph-embeddings-at-scale-with-the-deep-graph-library/) and [DGL-KE Documentation](https://dglke.dgl.ai/doc/).\n\n### **Packaging the solution as a scalable workflow**\n\nWe used SageMaker notebooks to develop and debug our code. For production, we wanted to invoke the model as a simple API call. We found that we didn’t need to separate data preparation, model training, and prediction, and it was convenient to package the whole pipeline as a single script and use SageMaker processing. SageMaker processing allows you to run a script remotely on a chosen instance type and Docker image without having to worry about resource allocation and data transfer. This was simple and cost-effective for us, because the GPU instance is only used and paid for during the 15 minutes needed for the script to run.\n\nFor detailed instructions on how to use SageMaker processing, see Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation and [Processing](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html).\n\n### **Results**\n\nOur custom graph model performed very well compared to other methods: performance improved by 80%, with more stable results across all trader types. We measured performance by mean recall (percentage of actual trades predicted by the recommender, averaged over all traders). With other standard metrics, the improvement ranged from 50–130%.\n\nThis performance enabled us to better match traders and bonds, indicating an enhanced trader experience within the model, with machine learning delivering a big step forward from hard-coded rules, which can be difficult to scale.\n\n### **Conclusion**\n\nTrumid is focused on delivering innovative products and workflow efficiencies to their community of users. Building tomorrow’s credit trading network requires continuous collaboration with peers and industry experts like the AWS ML Solutions Lab, designed to help you innovate faster.\n\nFor more information, see the following resources:\n\n- For a background on graph ML and use cases, refer to [How AWS uses graph neural networks to meet customer needs](https://www.amazon.science/blog/how-aws-uses-graph-neural-networks-to-meet-customer-needs).\n- For more information about graph ML tools, explore [DGL-KE, DGL](https://github.com/awslabs/dgl-ke), and [Amazon Neptune](https://aws.amazon.com/neptune/).\n- Deliver product recommendations with [Amazon Personalize](https://aws.amazon.com/personalize/).\n- Collaborate with the [AWS Machine Learning Solutions Lab](https://aws.amazon.com/ml-solutions-lab/).\n\n### **About the authors**\n\n![image.png](https://dev-media.amazoncloud.cn/c676c0df3bdb43e29e87a63a90971f71_image.png)\n\n**Marc van Oudheusden** is a Senior Data Scientist with the Amazon ML Solutions Lab team at Amazon Web Services. He works with AWS customers to solve business problems with artificial intelligence and machine learning. Outside of work you may find him at the beach, playing with his children, surfing or kitesurfing.\n\n![image.png](https://dev-media.amazoncloud.cn/b6736671dc7441d0b0cf6af945308bce_image.png)\n\n**Mutisya Ndunda** is the Head of Data Strategy and AI at Trumid. He is a seasoned financial professional with over 20 years of broad institutional experience in capital markets, trading, and financial technology. Mutisya has a strong quantitative and analytical background with over a decade of experience in artificial intelligence, machine learning and big data analytics. Prior to Trumid, he was the CEO of Alpha Vertex, a financial technology company offering analytical solutions powered by proprietary AI algorithms to financial institutions. Mutisya holds a bachelor’s degree in Electrical Engineering from Cornell University and a master’s degree in Financial Engineering from Cornell\n\n![image.png](https://dev-media.amazoncloud.cn/ce5544e3bf454bdd8692d911057c69eb_image.png)\n\n**Isaac Privitera** is a Senior Data Scientist at the Amazon Machine Learning Solutions Lab, where he develops bespoke machine learning and deep learning solutions to address customers’ business problems. He works primarily in the computer vision space, focusing on enabling AWS customers with distributed training and active learning.","render":"<p>This is a guest post co-written with Mutisya Ndunda from Trumid.</p>\n<p>Like many industries, the corporate bond market doesn’t lend itself to a one-size-fits-all approach. It’s vast, liquidity is fragmented, and institutional clients demand solutions tailored to their specific needs. Advances in AI and machine learning (ML) can be employed to improve the customer experience, increase the efficiency and accuracy of operational workflows, and enhance performance by supporting multiple aspects of the trading process.</p>\n<p><a href=\"https://www.trumid.com/\" target=\"_blank\">Trumid</a> is a financial technology company building tomorrow’s credit trading network—a marketplace for efficient trading, information dissemination, and execution between corporate bond market participants. Trumid is optimizing the credit trading experience by combining leading-edge product design and technology principles with deep market expertise. The result is an integrated trading solution delivering a full ecosystem of protocols and execution tools within one intuitive platform.</p>\n<p>The bond trading market has traditionally involved offline buyer/seller matching processes aided by rules-based technology. Trumid has embarked on an initiative to transform this experience. Through its electronic trading platform, traders can access thousands of bonds to buy or sell, a community of engaged users to interact with, and a variety of trading protocols and execution solutions. With an expanding network of users, Trumid’s AI and Data Strategy team partnered with the <a href=\"https://aws.amazon.com/ml-solutions-lab/\" target=\"_blank\">AWS Machine Learning Solutions Lab</a>. The objective was to develop ML systems that could deliver a more personalized trading experience by modeling the interest and preferences of users for bonds available on Trumid.</p>\n<p>These ML models can be used to speed up time to insight and action by personalizing how information is displayed to each user to ensure that the most relevant and actionable information a trader may care about is prioritized and accessible.</p>\n<p>To solve this challenge, Trumid and the ML Solutions Lab developed an end-to-end data preparation, model training, and inference process based on a deep neural network model built using the Deep Graph Library for Knowledge Embedding (<a href=\"https://github.com/awslabs/dgl-ke\" target=\"_blank\">DGL-KE</a>). An end-to-end solution with <a href=\"https://aws.amazon.com/sagemaker/\" target=\"_blank\">Amazon SageMaker</a> was also deployed.</p>\n<h3><a id=\"Benefits_of_graph_machine_learning_12\"></a><strong>Benefits of graph machine learning</strong></h3>\n<p>Real-world data is complex and interconnected, and often contains network structures. Examples include molecules in nature, social networks, the internet, roadways, and financial trading platforms.</p>\n<p>Graphs provide a natural way to model this complexity by extracting important and rich information that is embedded in the relations between entities.</p>\n<p>Traditional ML algorithms require data to be organized as tables or sequences. This generally works well, but some domains are more naturally and effectively represented by graphs (such as a network of objects related to each other, as illustrated later in this post). Instead of coercing these graph datasets into tables or sequences, you can use graph ML algorithms to both represent and learn from the data as presented in its graph form, including information about constituent nodes, edges, and other features.</p>\n<p>Considering that bond trading is inherently represented as a network of interactions between buyers and sellers involving various types of bond instruments, an effective solution needs to harness the network effects of the communities of traders that participate in the market. Let’s look at how we leveraged the trading network effects and implemented this vision here.</p>\n<h3><a id=\"Solution_22\"></a><strong>Solution</strong></h3>\n<p>Bond trading is characterized by several factors, including trade size, term, issuer, rate, coupon values, bid/ask offer, and type of trading protocol involved. In addition to orders and trades, Trumid also captures “indications of interest” (IOIs). The historical interaction data embodies the trading behavior and the market conditions evolving over time. We used this data to build a graph of timestamped interactions between traders, bonds, and issuers, and used graph ML to predict future interactions.</p>\n<p>The recommendation solution comprised four main steps:</p>\n<ul>\n<li>Preparing the trading data as a graph dataset</li>\n<li>Training a knowledge graph embedding model</li>\n<li>Predicting new trades</li>\n<li>Packaging the solution as a scalable workflow</li>\n</ul>\n<p>In the following sections, we discuss each step in more detail.</p>\n<h3><a id=\"Preparing_the_trading_data_as_a_graph_dataset_35\"></a><strong>Preparing the trading data as a graph dataset</strong></h3>\n<p>There are many ways to represent trading data as a graph. One option is to represent the data exhaustively with nodes, edges, and properties: traders as nodes with properties (such as employer or tenure), bonds as nodes with properties (issuer, amount outstanding, maturity, rate, coupon value), and trades as edges with properties (date, type, size). Another option is to simplify the data and use only nodes and relations (relations are typed edges like traded or issued-by). This latter approach worked better in our case, and we used the graph represented in the following figure.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/1220324cb4f341cc96c69b4bd67cd991_image.png\" alt=\"image.png\" /></p>\n<p>Graph of relations between traders, bonds and bond issuers</p>\n<p>Additionally, we removed some of the edges considered obsolete: if a trader interacted with more than 100 different bonds, we kept only the last 100 bonds.</p>\n<p>Finally, we saved the graph dataset as a list of edges in <a href=\"https://en.wikipedia.org/wiki/Tab-separated_values\" target=\"_blank\">TSV</a> format:</p>\n<pre><code class=\"lang-\">t987\ttrade-old\t\ti55198\nt995\ttrade-old\t\ti55306\nt987\ttrade-recent\ti24528\nt995\ttrade-recent\ti49181\nt987\tioi-recent\t\ti24523\nt995\tioi-old \t\ti49178\n…\ni49611\tissued-by\t\tXXX\ni46569\tissued-by\t\tYYY\ni46507\tissued-by\t\tZZZ\n</code></pre>\n<h3><a id=\"Training_a_knowledge_graph_embedding_model_60\"></a><strong>Training a knowledge graph embedding model</strong></h3>\n<p>For graphs composed only of nodes and relations (often called knowledge graphs), the DGL team developed the knowledge graph embedding framework <a href=\"https://github.com/awslabs/dgl-ke\" target=\"_blank\">DGL-KE</a>. KE stands for knowledge embedding, the idea being to represent nodes and relations (knowledge) by coordinates (embeddings) and optimize (train) the coordinates so that the original graph structure can be recovered from the coordinates. In the list of available embedding models, we selected TransE (translational embeddings). TransE trains embeddings with the objective of approximating the following equality:</p>\n<p>Source node embedding + relation embedding = target node embedding (1)</p>\n<p>We trained the model by invoking the <code>dglke_train</code> command. The output of the training is a model folder containing the trained embeddings.</p>\n<p>For more details about TransE, refer to <a href=\"https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf\" target=\"_blank\">Translating Embeddings for Modeling Multi-relational Data.</a></p>\n<h3><a id=\"Predicting_new_trades_70\"></a><strong>Predicting new trades</strong></h3>\n<p>To predict new trades from a trader with our model, we used the equality (1): add the trader embedding to the trade-recent embedding and looked for bonds closest to the resulting embedding.</p>\n<p>We did this in two steps:</p>\n<ol>\n<li>Compute scores for all possible trade-recent relations with <code>dglke_predict</code>.</li>\n<li>Compute the top 100 highest scores for each trader.</li>\n</ol>\n<p>For detailed instructions on how to use the DGL-KE, refer to <a href=\"https://aws.amazon.com/blogs/machine-learning/training-knowledge-graph-embeddings-at-scale-with-the-deep-graph-library/\" target=\"_blank\">Training knowledge graph embeddings at scale with the Deep Graph Library</a> and <a href=\"https://dglke.dgl.ai/doc/\" target=\"_blank\">DGL-KE Documentation</a>.</p>\n<h3><a id=\"Packaging_the_solution_as_a_scalable_workflow_81\"></a><strong>Packaging the solution as a scalable workflow</strong></h3>\n<p>We used SageMaker notebooks to develop and debug our code. For production, we wanted to invoke the model as a simple API call. We found that we didn’t need to separate data preparation, model training, and prediction, and it was convenient to package the whole pipeline as a single script and use SageMaker processing. SageMaker processing allows you to run a script remotely on a chosen instance type and Docker image without having to worry about resource allocation and data transfer. This was simple and cost-effective for us, because the GPU instance is only used and paid for during the 15 minutes needed for the script to run.</p>\n<p>For detailed instructions on how to use SageMaker processing, see Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation and <a href=\"https://sagemaker.readthedocs.io/en/stable/api/training/processing.html\" target=\"_blank\">Processing</a>.</p>\n<h3><a id=\"Results_87\"></a><strong>Results</strong></h3>\n<p>Our custom graph model performed very well compared to other methods: performance improved by 80%, with more stable results across all trader types. We measured performance by mean recall (percentage of actual trades predicted by the recommender, averaged over all traders). With other standard metrics, the improvement ranged from 50–130%.</p>\n<p>This performance enabled us to better match traders and bonds, indicating an enhanced trader experience within the model, with machine learning delivering a big step forward from hard-coded rules, which can be difficult to scale.</p>\n<h3><a id=\"Conclusion_93\"></a><strong>Conclusion</strong></h3>\n<p>Trumid is focused on delivering innovative products and workflow efficiencies to their community of users. Building tomorrow’s credit trading network requires continuous collaboration with peers and industry experts like the AWS ML Solutions Lab, designed to help you innovate faster.</p>\n<p>For more information, see the following resources:</p>\n<ul>\n<li>For a background on graph ML and use cases, refer to <a href=\"https://www.amazon.science/blog/how-aws-uses-graph-neural-networks-to-meet-customer-needs\" target=\"_blank\">How AWS uses graph neural networks to meet customer needs</a>.</li>\n<li>For more information about graph ML tools, explore <a href=\"https://github.com/awslabs/dgl-ke\" target=\"_blank\">DGL-KE, DGL</a>, and <a href=\"https://aws.amazon.com/neptune/\" target=\"_blank\">Amazon Neptune</a>.</li>\n<li>Deliver product recommendations with <a href=\"https://aws.amazon.com/personalize/\" target=\"_blank\">Amazon Personalize</a>.</li>\n<li>Collaborate with the <a href=\"https://aws.amazon.com/ml-solutions-lab/\" target=\"_blank\">AWS Machine Learning Solutions Lab</a>.</li>\n</ul>\n<h3><a id=\"About_the_authors_104\"></a><strong>About the authors</strong></h3>\n<p><img src=\"https://dev-media.amazoncloud.cn/c676c0df3bdb43e29e87a63a90971f71_image.png\" alt=\"image.png\" /></p>\n<p><strong>Marc van Oudheusden</strong> is a Senior Data Scientist with the Amazon ML Solutions Lab team at Amazon Web Services. He works with AWS customers to solve business problems with artificial intelligence and machine learning. Outside of work you may find him at the beach, playing with his children, surfing or kitesurfing.</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/b6736671dc7441d0b0cf6af945308bce_image.png\" alt=\"image.png\" /></p>\n<p><strong>Mutisya Ndunda</strong> is the Head of Data Strategy and AI at Trumid. He is a seasoned financial professional with over 20 years of broad institutional experience in capital markets, trading, and financial technology. Mutisya has a strong quantitative and analytical background with over a decade of experience in artificial intelligence, machine learning and big data analytics. Prior to Trumid, he was the CEO of Alpha Vertex, a financial technology company offering analytical solutions powered by proprietary AI algorithms to financial institutions. Mutisya holds a bachelor’s degree in Electrical Engineering from Cornell University and a master’s degree in Financial Engineering from Cornell</p>\n<p><img src=\"https://dev-media.amazoncloud.cn/ce5544e3bf454bdd8692d911057c69eb_image.png\" alt=\"image.png\" /></p>\n<p><strong>Isaac Privitera</strong> is a Senior Data Scientist at the Amazon Machine Learning Solutions Lab, where he develops bespoke machine learning and deep learning solutions to address customers’ business problems. He works primarily in the computer vision space, focusing on enabling AWS customers with distributed training and active learning.</p>\n"}
目录
亚马逊云科技解决方案 基于行业客户应用场景及技术领域的解决方案
联系亚马逊云科技专家
亚马逊云科技解决方案
基于行业客户应用场景及技术领域的解决方案
联系专家
0
目录
关闭