🚀 Neural v0.2.1: Macros, Fixes, and PyTorch Training Loop!

A new version of Neural is here! 🎉 This update introduces DSL macros, major bug fixes, improvements to TensorFlow and PyTorch code generation, and an enhanced debugging experience. Neural is still a work in progress and quite buggy, but this release shows real progress! 🚧


🔥 New Features

πŸ—οΈ 1. Macros for the DSL with define Blocks

Macros let you reuse predefined layer structures: define once, use multiple times. Example:

define MyDense {
    Dense(units=128, activation="relu")
}

network ExampleNet {
    input: (28, 28)
    layers:
        MyDense
        Dropout(rate=0.5)
        Output(units=10, activation="softmax")
}

✅ Benefits:

  • Reduce redundancy in large models.
  • Maintain consistency across layers.
  • Simplify network definitions.

πŸ› οΈ Fixes and Enhancements

✅ 2. TensorFlow Code Generation Fixes

Test failure: test_code_generator.py::test_generate_tensorflow_complex #68

  • Loss functions and optimizers now include their parameters.
  • Optimizer imports are now explicit (e.g., from tensorflow.keras.optimizers import Adam).
  • Model compilation consistency: ensured correct formatting in model.compile() (see the sketch below).
  • Loss handling improvement: dictionary-based loss functions are now extracted properly.
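To make this concrete, here is a rough sketch of what the generated compilation code now looks like (the stand-in model, loss, and metric choices are illustrative, not the exact generator output):

import tensorflow as tf
from tensorflow.keras.optimizers import Adam

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # stand-in model
model.compile(
    optimizer=Adam(learning_rate=0.001),     # optimizer with explicit parameters
    loss="sparse_categorical_crossentropy",  # loss referenced by name
    metrics=["accuracy"],
)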

✅ 3. Layer Multiplication Bug Fix

Test failure: test_code_generator.py::test_layer_multiplication #69

  • Fixed incorrect key: pop('multiply', 1) → pop('*', 1).
  • Repeated layers (e.g. a multiplied Dense(units=64)) are now expanded the correct number of times, as sketched below.
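Schematically, the expansion now behaves like this (the parameter dict layout is an assumption for illustration; only the '*' key change comes from the fix):

# Example parsed layer params, with '*' holding the repetition count
params = {'*': 3, 'units': 64}
layers = []

count = params.pop('*', 1)  # was pop('multiply', 1), which never matched
for _ in range(count):
    layers.append(('Dense', dict(params)))  # emit Dense(units=64) three times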

πŸ‹οΈ PyTorch Improvements

🔄 4. PyTorch Training Loop

Added a basic PyTorch training loop using training_config. Example:

import torch
import torch.nn as nn
import torch.optim as optim

# Define the model, optimizer, and loss function
model = MyNeuralModel()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

def train_loop(dataloader, model, loss_fn, optimizer):
    for batch in dataloader:
        inputs, labels = batch
        optimizer.zero_grad()            # reset gradients from the previous step
        outputs = model(inputs)          # forward pass
        loss = loss_fn(outputs, labels)  # compute the loss
        loss.backward()                  # backpropagate
        optimizer.step()                 # update the weights

Users must provide their own dataset via a DataLoader, but this serves as a template.
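For example, a toy run with the template above might look like this (the random dataset and shapes are placeholders; plug in your real data):

from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 256 flattened 28x28 samples with 10 classes
X = torch.randn(256, 28 * 28)
y = torch.randint(0, 10, (256,))
dataloader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

for epoch in range(5):  # train for a few epochs
    train_loop(dataloader, model, loss_fn, optimizer)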

πŸ“ 5. Improved Comments in Generated Code

  • More detailed inline comments for TensorFlow and PyTorch.
  • Easier debugging and learning.

πŸ” 6. Optimizer Configuration Extraction

  • Extracts optimizer_config['params'] and falls back to lr=0.001 when no learning rate is given.
  • Uses repr() so string and numeric values are emitted correctly in the generated code (sketched below).
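A rough sketch of the extraction pattern (render_optimizer and the config layout are illustrative stand-ins, not Neural's actual internals):

def render_optimizer(optimizer_config):
    # Pull user-supplied params, defaulting the learning rate
    params = dict(optimizer_config.get('params', {}))
    params.setdefault('lr', 0.001)
    # repr() quotes strings and leaves numbers bare in the generated source
    args = ', '.join(f'{k}={repr(v)}' for k, v in params.items())
    return f"optim.{optimizer_config['type']}(model.parameters(), {args})"

# e.g. {'type': 'Adam', 'params': {'lr': 0.01}}
#   -> "optim.Adam(model.parameters(), lr=0.01)"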

📜 7. Logging Instead of Print Statements

Replaced print() with logger.warning() for unsupported PyTorch layers.
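Schematically (the supported-layer set and helper function here are hypothetical placeholders):

import logging

logger = logging.getLogger(__name__)

SUPPORTED_LAYERS = {"Dense", "Dropout", "Output"}  # hypothetical lookup set

def warn_if_unsupported(layer_type):
    if layer_type not in SUPPORTED_LAYERS:
        # previously a bare print(); now goes through the logging system
        logger.warning("Unsupported PyTorch layer: %s", layer_type)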


🛠 Macro Parsing & Error Fixes

🚨 8. Macro Parsing Fixes

Test failure: test_parser.py::test_macro_parsing[macro-basic]

  • Macros now store their layer definitions correctly.
  • Macros are expanded properly when referenced.
  • Both named and ordered parameters are supported in macros.
  • Error messages have been improved for better debugging.

Example:

define MyDense {
    Dense(units=128, activation="relu")
}

network ExampleNet {
    layers:
        MyDense(units=256)  # Overrides default units
}

🔹 Macros now allow parameter overrides!

🛠 9. Fixed Layer Tokenization Errors

Test failure: test_parser.py::test_layer_parsing[custom-shape]

  • Standard layer names like LSTM and GRU were mistakenly treated as macros.
  • These names are now explicitly defined in the grammar to prevent conflicts.

📜 Miscellaneous Improvements

📑 10. JSON Schema for Code Editors

  • Introduced neural-schema.json.
  • Provides syntax highlighting, autocompletion, and validation in code editors.
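Beyond editor integration, the schema can also be used to validate configs programmatically; a minimal sketch using the jsonschema package (the file names are assumptions):

import json
from jsonschema import validate

with open("neural-schema.json") as f:
    schema = json.load(f)

with open("my_model.json") as f:  # hypothetical config to validate
    config = json.load(f)

validate(instance=config, schema=schema)  # raises ValidationError on mismatch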

🎨 11. Dashboard Visualization Test Fixes

Test failure: test_dashboard.py::test_dashboard_visualization #72

  • Fixed page title assertion errors.
  • Cleaned up resources properly.

🔄 12. Nested Layer Configurations

  • Layers can now contain sub-layers using {}.
  • Used for complex architectures like Transformers and Residual Networks.

Example:

network NestedExample {
    layers:
        TransformerEncoder {
            SelfAttention(num_heads=8)
            FeedForward(hidden_dim=512)
        }
}

✅ Easier deep learning model structuring!


🏁 Conclusion

This update brings powerful macros, better error handling, PyTorch improvements, and key bug fixes. 🚀

⚠️ Neural is still experimental and very buggy; this release is just to show progress!

📥 Upgrade Now:

pip install --upgrade neural-dsl

💬 Feedback? Join the discussion!

🔥 Stay tuned for more improvements! Happy coding! 🧠💡
