Neural v0.2.1: Macros, Fixes, and PyTorch Training Loop!
A new version of Neural is here! This update introduces DSL macros, major bug fixes, improvements to TensorFlow and PyTorch code generation, and an enhanced debugging experience. The project is still very much a work in progress and has plenty of bugs, but it shows progress!
New Features
1. Macros for the DSL with `define` Blocks
Macros now allow reusing predefined layer structures! Define once, use multiple times. Example:
```
define MyDense {
    Dense(units=128, activation="relu")
}

network ExampleNet {
    input: (28, 28)
    layers:
        MyDense
        Dropout(rate=0.5)
        Output(units=10, activation="softmax")
}
```
Benefits:
- Reduce redundancy in large models.
- Maintain consistency across layers.
- Simplify network definitions.
Fixes and Enhancements
2. TensorFlow Code Generation Fixes
Fixes test failure: `test_code_generator.py::test_generate_tensorflow_complex` (#68)
- Loss functions and optimizers now include their parameters.
- Optimizer imports are now explicit (e.g., `from tensorflow.keras.optimizers import Adam`).
- Model compilation consistency: ensured correct formatting in `model.compile()`.
- Loss handling improvement: properly extracts dictionary-based loss functions.
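For illustration, here is the kind of output the TensorFlow backend is aiming for. This is a hand-written sketch, not the generator's verbatim output, and the toy model, loss, and metrics are assumptions:

```python
import tensorflow as tf
from tensorflow.keras.optimizers import Adam  # explicit optimizer import

# Toy model standing in for whatever the generator emits
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Optimizer and loss now carry their parameters through to compile()
model.compile(
    optimizer=Adam(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```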
3. Layer Multiplication Bug Fix
Fixes test failure: `test_code_generator.py::test_layer_multiplication` (#69)
- Fixed an incorrect dictionary key: `pop('multiply', 1)` → `pop('*', 1)`.
- Generated code now correctly counts repeated layers such as `Dense(units=64)` (see the sketch below).
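To make the fix concrete, here is a hypothetical Python sketch of the repetition logic. The `expand_layer` helper and the layer-spec format are illustrative assumptions, not Neural's actual code:

```python
# Hypothetical sketch: a layer spec carrying a '*' count expands into N identical layers.
def expand_layer(layer_spec):
    params = dict(layer_spec.get("params", {}))
    count = params.pop("*", 1)  # the fix: look up '*', not 'multiply'
    return [{"type": layer_spec["type"], "params": params} for _ in range(count)]

layers = expand_layer({"type": "Dense", "params": {"units": 64, "*": 3}})
print(len(layers))  # -> 3 copies of Dense(units=64)
```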
PyTorch Improvements
4. PyTorch Training Loop
Added a basic PyTorch training loop using `training_config`. Example:
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Define model
model = MyNeuralModel()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

def train_loop(dataloader, model, loss_fn, optimizer):
    for batch in dataloader:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
```
Users must provide their own dataset (`DataLoader`), but this serves as a template; see the usage sketch below.
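As a quick illustration, here is one way to drive the generated loop. The random tensors are placeholders standing in for a real dataset; any `torch.utils.data.DataLoader` works:

```python
from torch.utils.data import DataLoader, TensorDataset

# Dummy data in place of a user-supplied dataset (placeholder shapes and labels)
inputs = torch.randn(256, 28 * 28)
labels = torch.randint(0, 10, (256,))
dataloader = DataLoader(TensorDataset(inputs, labels), batch_size=32, shuffle=True)

# Run a few epochs with the model, loss, and optimizer defined above
for epoch in range(5):
    train_loop(dataloader, model, loss_fn, optimizer)
```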
5. Improved Comments in Generated Code
- More detailed inline comments for TensorFlow and PyTorch.
- Easier debugging and learning.
6. Optimizer Configuration Extraction
- Extracts `optimizer_config['params']`; defaults to `lr=0.001`.
- Uses `repr()` for correct string/numeric value handling.
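Roughly, the extraction works like this. This is a simplified sketch with an assumed `optimizer_config` shape, not the exact generator code:

```python
optimizer_config = {"type": "Adam", "params": {"lr": 0.001}}

# Pull the parameter dict, falling back to the default learning rate
params = optimizer_config.get("params", {"lr": 0.001})

# repr() keeps strings quoted and numbers unquoted in the emitted code
args = ", ".join(f"{key}={repr(value)}" for key, value in params.items())
print(f"optim.{optimizer_config['type']}(model.parameters(), {args})")
# -> optim.Adam(model.parameters(), lr=0.001)
```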
7. Logging Instead of Print Statements
Replaced `print()` with `logger.warning()` for unsupported PyTorch layers.
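Conceptually, the change looks like this; the supported-layer set and helper function are illustrative assumptions, not Neural's actual internals:

```python
import logging

logger = logging.getLogger(__name__)

SUPPORTED_LAYERS = {"Dense", "Conv2D", "Dropout", "Output"}  # illustrative set

def warn_if_unsupported(layer_type):
    # Previously a bare print(); warnings now go through the logging framework
    if layer_type not in SUPPORTED_LAYERS:
        logger.warning("Unsupported PyTorch layer: %s (skipping)", layer_type)

warn_if_unsupported("ExoticLayer")
```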
Macro Parsing & Error Fixes
8. Macro Parsing Fixes
Fixes test failure: `test_parser.py::test_macro_parsing[macro-basic]`
- Macros now store their layer definitions correctly.
- Expands macros properly when referenced.
- Supports both named and ordered parameters in macros.
- Error messages improved for better debugging.
Example:
```
define MyDense {
    Dense(units=128, activation="relu")
}

network ExampleNet {
    layers:
        MyDense(units=256)  # Overrides default units
}
```
Macros now allow parameter overrides!
9. Fixed Layer Tokenization Errors
Fixes test failure: `test_parser.py::test_layer_parsing[custom-shape]`
- Standard layer names like `LSTM` and `GRU` were mistakenly treated as macros.
- These names are now explicitly defined in the grammar to prevent conflicts.
Miscellaneous Improvements
10. JSON Schema for Code Editors
- Introduced `neural-schema.json`.
- Provides syntax highlighting, autocompletion, and validation in code editors.
11. Dashboard Visualization Test Fixes
Fixes test failure: `test_dashboard.py::test_dashboard_visualization` (#72)
- Fixed page title assertion errors.
- Cleaned up resources properly.
12. Nested Layer Configurations
- Layers can now contain sub-layers using `{}`.
- Used for complex architectures like Transformers and residual networks.
Example:
```
network NestedExample {
    layers:
        TransformerEncoder {
            SelfAttention(num_heads=8)
            FeedForward(hidden_dim=512)
        }
}
```
Easier deep learning model structuring!
Conclusion
This update brings powerful macros, better error handling, PyTorch improvements, and key bug fixes.
Neural is still in an experimental state and very buggy; this release is just to show progress!
Upgrade Now:

```bash
pip install --upgrade neural-dsl
```
Feedback? Join the discussion!
- Discord: Neural Community
- GitHub: Lemniscate-SHA-256/Neural
Stay tuned for more improvements! Happy coding!