Neural DSL just hit v0.2.3, bringing a killer feature: hyperparameter optimization (HPO) that works across PyTorch and TensorFlow with a single declarative config. Define your model once, e.g. Dense(HPO(choice(128, 256))) and Adam(learning_rate=HPO(log_range(1e-4, 1e-2))), then train with neural run --backend pytorch. No rewriting needed. It uses Optuna under the hood, with a unified training loop to evaluate trials.
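To make the idea concrete, here's a rough Python sketch of how I picture those declarations mapping onto Optuna: choice(...) as a categorical suggestion, log_range(...) as a log-uniform float, and a backend-parameterized objective standing in for the unified training loop. This is not Neural's actual code, and build_and_train is just a dummy placeholder so the sketch runs on its own.

    import optuna

    def build_and_train(units, lr, backend):
        # Stand-in for the unified training loop: in the real tool this would
        # build the model on the chosen backend (pytorch or tensorflow),
        # train it, and return a validation metric. Dummy score here.
        return abs(units - 256) * 1e-3 + abs(lr - 1e-3)

    def objective(trial, backend="pytorch"):
        # HPO(choice(128, 256)) -> categorical suggestion
        units = trial.suggest_categorical("dense_units", [128, 256])
        # HPO(log_range(1e-4, 1e-2)) -> log-uniform float suggestion
        lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
        return build_and_train(units, lr, backend)

    study = optuna.create_study(direction="minimize")
    study.optimize(lambda t: objective(t, backend="pytorch"), n_trials=20)
    print(study.best_params)

The appeal of the DSL is that you never write this plumbing yourself; the same HPO spec drives trials on either backend.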
This release also adds layers like LayerNormalization and Attention, fixes parser bugs, and improves validation (e.g., catching negative Conv2D filters). It’s still a WIP—expect bugs—but the cross-framework HPO feels like a game-changer for experimenting across ecosystems.
Details and source: https://github.com/Lemniscate-SHA-256/Neural/releases/tag/v0.... Thoughts on where this could go? Feedback welcome!