This executable blog post is the third in a series on machine learning and explores code generation from a 16-billion-parameter large language model (LLM). After a brief look under the hood at the LLM's structure and parameter allocation, we generate a variety of Python functions and make observations about code quality and security. As with OpenAI's ChatGPT, Google's Bard, Meta's LLaMA and GitHub's Copilot, we will see some fantastic capabilities and a few intriguing misses. The results demonstrate that human expertise remains crucial, whether in engineering the prompt or in evaluating the generated code for suitability.
The Jupyter-based notebook can be found here.
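For a sense of the workflow the post walks through, the sketch below shows one way to prompt a large code-generation model to complete a Python function via the Hugging Face transformers API. The model name, prompt and generation settings are illustrative assumptions, not the exact setup used in the post, and a model of this size needs substantial GPU memory to run.

```python
# Minimal sketch: prompt a large code-generation model to complete a Python
# function. Model name, prompt and settings are assumptions for illustration,
# not the exact configuration used in the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Salesforce/codegen-16B-mono"  # assumed ~16B-parameter code model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.float16,  # half precision to reduce the memory footprint
    device_map="auto",          # requires the accelerate package
)

# A docstring-plus-signature prompt: the model is asked to fill in the body.
prompt = (
    '"""Return the SHA-256 hex digest of the file at the given path."""\n'
    "def sha256_file(path):\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,                  # cap the completion length
    do_sample=True,
    temperature=0.2,                     # low temperature favours conservative completions
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Whatever body the model produces still needs the human review the post emphasises, for example checking the generated file handling and error handling before use.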
Alex Plaskett and McCaulay Hudson presented this talk at HITB AMS on 20 April 2023. The talk showcased NCC Group's Exploit Development Group (EDG) targeting all consumer routers (Netgear, TP-Link and Synology) at Pwn2Own Toronto 2022, from both a LAN and WAN perspective. The talk also described how we compromised…
NCC Group was selected to perform a security evaluation of the Kubernetes 1.24.0 release in response to Kubernetes SIG Security's Third-Party Security Audit Request for Proposals. The testing portion of the audit took place in May and June 2022. The global project team performed a security architectural design review that resulted…
In August 2022, Solana Foundation engaged NCC Group to conduct a security assessment of the ZK-Token SDK, a collection of open-source functions and types that implement the core cryptographic functionalities of the Solana Program Library (SPL) Confidential Token extension. These functionalities are homomorphic encryption and associated proofs used to demonstrate…