{"id":83143,"date":"2025-08-01T18:01:31","date_gmt":"2025-08-01T12:31:31","guid":{"rendered":"https:\/\/www.the-next-tech.com\/?p=83143"},"modified":"2025-08-01T18:01:31","modified_gmt":"2025-08-01T12:31:31","slug":"how-i-download-use-glm-4-5-locally","status":"publish","type":"post","link":"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/","title":{"rendered":"How I Download &#038; Use GLM 4.5 Locally? Step By Step Guide"},"content":{"rendered":"<p>Zhipu AI, a Chinese AI company, has released two advanced large language models to date. <a href=\"https:\/\/www.the-next-tech.com\/artificial-intelligence\/what-is-glm-4-5-and-4-5-air\/\" target=\"_blank\" rel=\"noopener\">GLM 4.5 and GLM 4.5 Air<\/a> are billed as its most intelligent models for reasoning, coding, and agentic use cases.<\/p>\n<p><strong>The models are not currently distributed as downloadable desktop software. Developers can access them through the official website, Hugging Face, or the GitHub repo.<\/strong><\/p>\n<p>Even so, I figured out a way to download GLM 4.5 and run it locally on your computer. Yes, you heard that right!<\/p>\n<div class=\"question-listing\" style=\"border: 1px solid #DC2166; padding: 20px 30px 20px 50px; margin: 30px 0; background: rgb(220 33 102 \/ 6%); box-shadow: 0px 5px 20px rgb(0 0 0 \/ 20%); border-radius: 5px; position: relative;\">\n<div class=\"question-mark\" style=\"width: 30px; height: 30px; color: #fff; display: inline-block; text-align: center; line-height: 30px; border-radius: 50%; background: #DC2166; position: absolute; right: -10px; top: -13px;\">!<\/div>\n<p><span id=\"Future_Of_IT_Companies\" class=\"ez-toc-section\"><\/span>This article shows you, step by step, how to download and run GLM 4.5 using Python. 
By the end, you should feel confident running the GLM 4.5 model locally on your computer.<\/p>\n<\/div>\n<p><strong>Critical System Requirements<\/strong><\/p>\n<ul>\n<li><strong>Python:<\/strong> 3.9 or higher.<\/li>\n<li><strong>PIP:<\/strong> Latest version for proper dependency resolution.<\/li>\n<li><strong>VRAM:<\/strong> At least 16GB recommended for smooth performance.<\/li>\n<li><strong>GPU Required:<\/strong> vLLM only supports CUDA GPUs (e.g., NVIDIA RTX 30\/40 series).<\/li>\n<\/ul>\n<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_17 counter-hierarchy counter-decimal ez-toc-white\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" style=\"display: none;\"><i class=\"ez-toc-glyphicon ez-toc-icon-toggle\"><\/i><\/a><\/span><\/div>\n<nav><ul class=\"ez-toc-list ez-toc-list-level-1\"><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_1_-_Download_GLM_45_Model_From_Source_Forge\" title=\"Step 1 &#8211; Download GLM 4.5 Model From Source Forge\">Step 1 &#8211; Download GLM 4.5 Model From Source Forge<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_2_-_Install_Python_39\" title=\"Step 2 &#8211; Install Python 3.9\">Step 2 &#8211; Install Python 3.9<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_3_-_Install_Required_Dependencies\" title=\"Step 3 &#8211; Install Required Dependencies\">Step 3 &#8211; Install Required Dependencies<\/a><\/li><li class=\"ez-toc-page-1 
ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_4_-_Start_the_vLLM_Model_Server\" title=\"Step 4 &#8211; Start the vLLM Model Server\">Step 4 &#8211; Start the vLLM Model Server<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_5_-_Rename_configexamplejson_file\" title=\"Step 5 &#8211; Rename \u201cconfig.example.json\u201d file\">Step 5 &#8211; Rename \u201cconfig.example.json\u201d file<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Step_6_-_Run_Inference\" title=\"Step 6 &#8211; Run Inference\">Step 6 &#8211; Run Inference<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#GLM_45_Benchmarks\" title=\"GLM 4.5 Benchmarks\">GLM 4.5 Benchmarks<\/a><\/li><li class=\"ez-toc-page-1 ez-toc-heading-level-2\"><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/www.the-next-tech.com\/review\/how-i-download-use-glm-4-5-locally\/#Frequently_Asked_Questions\" title=\"Frequently Asked Questions\">Frequently Asked Questions<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Step_1_-_Download_GLM_45_Model_From_Source_Forge\"><\/span><strong>Step 1 &#8211; Download GLM 4.5 Model From Source Forge<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>You can download GLM 4.5 from its official SourceForge project page. 
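<\/p>
<p>Before moving on, it is worth confirming that your machine meets the &#8220;Python 3.9 or higher&#8221; requirement listed above. A minimal sketch of such a check (the function name is illustrative, not part of the GLM 4.5 repo):<\/p>

```python
import sys

def meets_python_requirement(version_info, minimum=(3, 9)):
    """Return True if the interpreter satisfies the minimum Python version."""
    return tuple(version_info[:2]) >= minimum

# Check the interpreter that this guide's tooling will run under
print(meets_python_requirement(sys.version_info))
```

<p>Note that a new-enough interpreter is necessary but not sufficient; as discussed later, Python 3.13 is currently too new for some of the required libraries.<\/p>
<p>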
Once downloaded and extracted, your project folder will include:<\/p>\n<ul>\n<li>.github\/ folder<\/li>\n<li>example\/ folder<\/li>\n<li>inference\/ folder<\/li>\n<li>resources\/ folder<\/li>\n<li>.gitignore file<\/li>\n<li>requirements.txt<\/li>\n<li>.pre-commit-config.yaml<\/li>\n<li>License and Readme files<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Step_2_-_Install_Python_39\"><\/span><strong>Step 2 &#8211; Install Python 3.9<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The GLM 4.5 local setup uses modern Python libraries like transformers, vllm, and accelerate, which require Python 3.9 or later.<\/p>\n<ul>\n<li>Go to the official Python.org website and download Python 3.9 or later.<\/li>\n<li>Download the Windows installer file.<\/li>\n<li>During installation, check the box that says &#8220;Add Python to PATH&#8221;.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" class=\"aligncenter wp-image-83145 size-full\" src=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110.png\" alt=\"Download python 3.9 version\" width=\"1245\" height=\"305\" srcset=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110.png 1245w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110-300x73.png 300w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110-1024x251.png 1024w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110-768x188.png 768w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01174842\/Download-python-3.9-version-e1754050753110-150x37.png 150w\" sizes=\"(max-width: 1245px) 100vw, 1245px\" 
\/><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Step_3_-_Install_Required_Dependencies\"><\/span><strong>Step 3 &#8211; Install Required Dependencies<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In your terminal, navigate to the project folder and run:<\/p>\n<p><em>pip install -r requirements.txt<\/em><\/p>\n<p>Once the base dependencies are installed, run:<\/p>\n<p><em>pip install -U vllm --pre --extra-index-url https:\/\/wheels.vllm.ai\/nightly<\/em><\/p>\n<p><span class=\"seethis_lik\"><strong>Note:<\/strong> This enables streaming support for vLLM.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Step_4_-_Start_the_vLLM_Model_Server\"><\/span><strong>Step 4 &#8211; Start the vLLM Model Server<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Next, start the vLLM model server. This launches a local OpenAI-compatible API server on http:\/\/127.0.0.1:8000.<\/p>\n<p><em>python -m vllm.entrypoints.openai.api_server --model THUDM\/glm-4-5<\/em><\/p>\n<p><span class=\"seethis_lik\"><strong>Note:<\/strong> If the model isn&#8217;t pre-downloaded, vLLM will fetch it automatically from Hugging Face.<\/span><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Step_5_-_Rename_configexamplejson_file\"><\/span><strong>Step 5 &#8211; Rename \u201cconfig.example.json\u201d file<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Inside the example\/ folder, rename \u201cconfig.example.json\u201d to \u201cconfig.json\u201d; no edits to the file are required.<\/p>\n<p><img loading=\"lazy\" class=\"aligncenter wp-image-83146 size-full\" src=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599.png\" alt=\"Rename the file\" width=\"1245\" height=\"448\" srcset=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599.png 1245w, 
https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599-300x108.png 300w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599-1024x368.png 1024w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599-768x276.png 768w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599-20x8.png 20w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175128\/Rename-the-file-e1754050973599-150x54.png 150w\" sizes=\"(max-width: 1245px) 100vw, 1245px\" \/><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Step_6_-_Run_Inference\"><\/span><strong>Step 6 &#8211; Run Inference<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>That\u2019s all. Now run the inference with the following command. 
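<\/p>
<p>A quick aside before the command: because Step 4 exposes an OpenAI-compatible API, you can also query the local server from your own Python script instead of using the bundled CLI. The sketch below assumes the server from Step 4 is running on http:\/\/127.0.0.1:8000 and serves the standard \/v1\/chat\/completions route; the function names are illustrative:<\/p>

```python
import json
import urllib.request

# Local vLLM server started in Step 4 (OpenAI-compatible API)
API_URL = "http://127.0.0.1:8000/v1/chat/completions"

def build_request(prompt, model="THUDM/glm-4-5", max_tokens=256):
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the Step 4 server to be running):
#   print(ask("Summarize what vLLM does in one sentence."))
```

<p>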
You&#8217;ll be prompted to enter text, and GLM 4.5 will respond directly in the terminal.<\/p>\n<p><em>python inference\/trans_infer_cli.py<\/em><\/p>\n<p><span class=\"seethis_lik\"><strong>Note:<\/strong> Running GLM 4.5 locally gives you full control over inference, privacy, and customization.<\/span><\/p>\n<h3><strong>Why Is GLM 4.5 Not Compatible With Python 3.13?<\/strong><\/h3>\n<p>If you try to run the GLM 4.5 or GLM 4.5 Air model on Python 3.13, it will likely fail. The problem is not the interpreter itself but the dependencies: several of the required libraries have not yet released versions that support Python 3.13.<\/p>\n<ul>\n<li>Zhipu AI&#8217;s GLM 4.5 stack depends on libraries such as transformers, accelerate, vllm, and sglang, all of which require Python above 3.8.<\/li>\n<li>According to its GitHub page, vLLM officially supports Python 3.8, 3.9, 3.10, 3.11, and 3.12.<\/li>\n<li>Running pip install sglang on Python 3.13 leads to errors like: <em>TypeError: urlopen() got an unexpected keyword argument &#8216;cafile&#8217;<\/em> (the cafile argument was removed in Python 3.13).<\/li>\n<\/ul>\n<h3><strong>What Can Developers Build Locally Using the GLM 4.5 Model?<\/strong><\/h3>\n<p><img loading=\"lazy\" class=\"size-full wp-image-83147 aligncenter\" src=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175337\/GLM-4.5-preview.gif\" alt=\"GLM 4.5 preview\" width=\"1245\" height=\"530\" \/><\/p>\n<p>Running GLM 4.5 locally opens up a range of applications, including:<\/p>\n<ul>\n<li><strong>Private Coding Assistants:<\/strong> Generate code snippets, debug functions, explain unfamiliar code, or power a conversational chatbot.<\/li>\n<li><strong>Content Generation Tools:<\/strong> Write SEO blogs, summarize news, and generate social media captions or ad copy.<\/li>\n<li><strong>Thinking Assistants:<\/strong> Extract action items from notes, expand bullet points into full paragraphs, or rewrite text in different tones.<\/li>\n<\/ul>\n<h2><span class=\"ez-toc-section\" 
id=\"GLM_45_Benchmarks\"><\/span><strong>GLM 4.5 Benchmarks<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The <strong>GLM (General Language Model) series<\/strong> gained widespread popularity early on; more than 700,000 developers use these models.<\/p>\n<p>Z.ai&#8217;s new GLM 4.5 competes strongly with the prevailing LLMs. In overall performance across 12 benchmarks (3 for agentic tasks, 7 for reasoning, and 2 for coding), GLM 4.5 ranks 3rd.<\/p>\n<p><img loading=\"lazy\" class=\"size-full wp-image-83148 aligncenter\" src=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air.png\" alt=\"Benchmark performance for GLM 4.5 and GLM 4.5 Air\" width=\"1245\" height=\"530\" srcset=\"https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air.png 1245w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-300x128.png 300w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-1024x436.png 1024w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-768x327.png 768w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-20x8.png 20w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-30x13.png 30w, https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-80x34.png 80w, 
https:\/\/s3.amazonaws.com\/static.the-next-tech.com\/wp-content\/uploads\/2025\/08\/01175418\/Benchmark-performance-for-GLM-4.5-and-GLM-4.5-Air-150x64.png 150w\" sizes=\"(max-width: 1245px) 100vw, 1245px\" \/><\/p>\n<p>The release of GLM 4.5 and GLM 4.5 Air is likely to shake up the industry, given their strong performance in the fields of coding, reasoning, and agentic tasks.<\/p>\n<p>\ud83d\udc49 <a href=\"https:\/\/chat.z.ai\/s\/a4c5c4b9-e495-4fd2-b2f8-e41910472807\" target=\"_blank\" rel=\"nofollow noopener\">Check out this conversation with Zhipu AI\u2019s GLM 4.5 model<\/a><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Frequently_Asked_Questions\"><\/span><strong>Frequently Asked Questions<\/strong><span class=\"ez-toc-section-end\"><\/span><\/h2>\n        <section class=\"sc_fs_faq sc_card\">\n            <div>\n\t\t\t\t<h4>Who developed GLM 4.5 model?<\/h4>                <div>\n\t\t\t\t\t                    <p>\n\t\t\t\t\t\tZhipu AI, a Chinese AI research company, developed this model.                    <\/p>\n                <\/div>\n            <\/div>\n        <\/section>\n\t        <section class=\"sc_fs_faq sc_card\">\n            <div>\n\t\t\t\t<h4>Which is the best LLM model for coding?<\/h4>                <div>\n\t\t\t\t\t                    <p>\n\t\t\t\t\t\tGPT-4 and Claude 3 Opus are considered top performers in coding benchmarks, but GLM 4.5 competes closely with them.                    <\/p>\n                <\/div>\n            <\/div>\n        <\/section>\n\t        <section class=\"sc_fs_faq sc_card\">\n            <div>\n\t\t\t\t<h4>Who beats DeepSeek R1 and Kimi K2 model?<\/h4>                <div>\n\t\t\t\t\t                    <p>\n\t\t\t\t\t\tGLM 4.5 outperforms both DeepSeek R1 and Kimi K2 in several reasoning and agentic benchmarks.                    
<\/p>\n                <\/div>\n            <\/div>\n        <\/section>\n\t        <section class=\"sc_fs_faq sc_card\">\n            <div>\n\t\t\t\t<h4>Can Zhipu AI GLM 4.5 work autonomously?<\/h4>                <div>\n\t\t\t\t\t                    <p>\n\t\t\t\t\t\tYes, GLM 4.5 supports agentic workflows and can perform tasks autonomously when integrated properly.                    <\/p>\n                <\/div>\n            <\/div>\n        <\/section>\n\t\n<script type=\"application\/ld+json\">\n    {\n        \"@context\": \"https:\/\/schema.org\",\n        \"@type\": \"FAQPage\",\n        \"mainEntity\": [\n                    {\n                \"@type\": \"Question\",\n                \"name\": \"Who developed GLM 4.5 model?\",\n                \"acceptedAnswer\": {\n                    \"@type\": \"Answer\",\n                    \"text\": \"Zhipu AI, a Chinese AI research company developed this model.\"\n                                    }\n            }\n            ,\t            {\n                \"@type\": \"Question\",\n                \"name\": \"Which is the best LLM model for coding?\",\n                \"acceptedAnswer\": {\n                    \"@type\": \"Answer\",\n                    \"text\": \"GPT-4 and Claude 3 Opus are considered top performers in coding benchmarks. 
However, GLM 4.5 competes closely with them.\"\n                                    }\n            }\n            ,\t            {\n                \"@type\": \"Question\",\n                \"name\": \"Who beats DeepSeek R1 and Kimi K2 model?\",\n                \"acceptedAnswer\": {\n                    \"@type\": \"Answer\",\n                    \"text\": \"GLM 4.5 outperforms both DeepSeek R1 and Kimi K2 in several reasoning and agentic benchmarks.\"\n                                    }\n            }\n            ,\t            {\n                \"@type\": \"Question\",\n                \"name\": \"Can Zhipu AI GLM 4.5 work autonomously?\",\n                \"acceptedAnswer\": {\n                    \"@type\": \"Answer\",\n                    \"text\": \"Yes, GLM 4.5 supports agentic workflows and can perform tasks autonomously when integrated properly.\"\n                                    }\n            }\n            \t        ]\n    }\n<\/script>\n\n<p><span class=\"seethis_lik\"><strong>Disclaimer:<\/strong> The information in this article is for educational purposes only. We do not own, and are not partnered with, the websites mentioned. For more information, read our <a href=\"https:\/\/www.the-next-tech.com\/terms-condition\/\" target=\"_blank\" rel=\"noopener\">terms and conditions<\/a>.<\/span><\/p>\n<p><span class=\"seethis_lik\"><strong>FYI:<\/strong> Explore more tips and tricks <a href=\"https:\/\/www.the-next-tech.com\/finance\/\" target=\"_blank\" rel=\"noopener\">here<\/a>. 
For more tech tips and quick solutions, follow our <a href=\"https:\/\/www.facebook.com\/TheNextTech2018\" target=\"_blank\" rel=\"noopener\">Facebook<\/a> page; for AI-driven insights and guides, follow our <a href=\"https:\/\/www.linkedin.com\/company\/the-next-tech\" target=\"_blank\" rel=\"noopener\">LinkedIn<\/a> page.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Another gracious Large Language Model developed by Chinese AI company, Zhipu AI introduced two advanced model to date. The GLM<\/p>\n","protected":false},"author":5083,"featured_media":83149,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[43],"tags":[51450,51451,51452,47866,49575],"_links":{"self":[{"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/posts\/83143"}],"collection":[{"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/users\/5083"}],"replies":[{"embeddable":true,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/comments?post=83143"}],"version-history":[{"count":3,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/posts\/83143\/revisions"}],"predecessor-version":[{"id":83152,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/posts\/83143\/revisions\/83152"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/media\/83149"}],"wp:attachment":[{"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/media?parent=83143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/categories?post=83143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.the-next-tech.com\/rest\/wp\/v2\/tags?post=83143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}