Jac Language Reference#
Welcome to the official reference guide for the Jac programming language. This document is designed to serve as a comprehensive reference manual as well as a formal specification of the language. The mission of this guide is to be a resource for developers seeking to answer the question, "How do I code X in Jac?"
This document is organized around the formal grammar for the language, with code examples and corresponding grammar snippets generated directly from the actual grammar and test cases maintained in the official repository. We expect the descriptions may occasionally lag behind the rapid evolution of Jac in these early days. If you notice something out of date, please make a pull request and join our contributor community.
Whitespace#
Jac uses curly braces to delimit code blocks rather than relying on indentation. As a result, varying indentation has no effect on execution order. Developers are free to format code as they see fit while still retaining Python-style readability:
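For instance, the following sketch (illustrative only, not from the repository test suite) runs the same regardless of its irregular indentation:
with entry {
        x = 1;
  y = 2;
            print(x + y);
}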
Consistent formatting is still recommended, but the compiler treats whitespace as insignificant when determining program structure.
Comments#
Single-line comments begin with # and extend to the end of the line. Jac also supports multiline comments delimited by #* and *#:
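A minimal sketch of both comment forms:
# A single-line comment
#* A multiline comment
   that spans several lines *#
with entry {
    print("comments are ignored");  # A trailing comment
}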
Base Module structure#
Code Example
Runnable Example in Jac and JacLib
"""A Docstring can be added the head of any module.
Any element in the module can also have a docstring.
If there is only one docstring before the first element,
it is assumed to be a module docstring.
"""
"""A docstring for add function"""
def add(a: int, b: int) -> int {
return a + b;
}
# No docstring for subtract function
def subtract(a: int, b: int) -> int {
return a - b;
}
with entry:__main__ {
print(add(1, subtract(3, 1)));
}
Jac Grammar Snippet
Description
In Jac, a module is analogous to a Python module, serving as a container for various elements such as functions, classes (referred to as "archetypes" later in this document), global variables, and other constructs that facilitate code organization and reusability. Each module begins with an optional module-level docstring, which provides a high-level overview of the module's purpose and functionality. This docstring, if present, is positioned at the very start of the module, before any other elements.
Docstrings
Jac adopts a stricter approach to docstring usage compared to Python. It mandates the inclusion of a single docstring at the module level and permits individual docstrings for each element within the module. This ensures that both the module itself and its constituent elements are adequately documented. If only one docstring precedes the first element, it is automatically designated as the module-level docstring.
Also note that Jac enforces type annotations in function signatures and class fields to promote type safety and, ultimately, more readable and scalable codebases.
Elements within a Jac module encompass familiar constructs from Python, including functions and classes, with the addition of some unique elements that will be discussed in further detail. Below is a table of module elements in Jac. These constructs are described in detail later in this document.
Module Item | Description |
---|---|
Import Statements | Same as Python with slightly different syntax; works with both .jac and .py files (in addition to packages) |
Archetypes | Includes the traditional Python class construct with equivalent semantics, and additionally introduces a number of new class-like constructs including obj, node, edge, and walker to enable the object-spatial programming paradigm |
Function Abilities | Equivalent to traditional Python function semantics, declared with the def keyword; type hints are required for parameters and returns |
Object-Spatial Abilities | A function-like construct that is triggered by node or walker types in the object-spatial paradigm |
Free Floating Code | Construct (with entry {...}) to express the presence of free-floating code within a module that is not part of a function or class-like object. Primarily for code cleanliness, readability, and maintainability. |
Global Variables | Module-level construct to express global variables without using with entry syntax (glob x=5 is equivalent to with entry {x=5;}) |
Test | A language-level construct for testing, realized with the test and check keywords |
Inline Python | Native Python code can be inlined alongside Jac code at arbitrary locations in a Jac program using the ::py:: directive |
Moreover, Jac requires that any standalone, module-level code be encapsulated within a with entry {} block. This design choice aims to enhance the clarity and cleanliness of Jac codebases.
Import/Include Statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac provides flexible module importing capabilities that extend Python's import system with additional convenience features and compile-time source inclusion. The language supports three distinct import mechanisms for different use cases.
Standard Import#
Standard imports bring entire modules into the current namespace:
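A minimal sketch using Python standard-library modules (both are resolved through Python's import machinery since the paths do not end in .jac):
import math;
import datetime as dt;

with entry {
    print(math.sqrt(16));
    print(dt.date.today());
}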
The compiler automatically detects module types: paths ending with .jac are treated as Jac modules, while other paths are forwarded to Python's import machinery. The as keyword provides aliasing functionality identical to Python's behavior.
Selective Import with Curly Braces#
Jac enables selective importing using curly brace syntax borrowed from ES modules:
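A minimal sketch using Python's math module (the curly braces enclose the imported items):
import from math { sqrt, pi };

with entry {
    print(sqrt(pi));
}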
This syntax avoids ambiguity in comma-separated import lists while maintaining visual consistency with Jac's block delimiters. The curly brace notation clearly distinguishes between multiple import paths and multiple items from a single path.
Include Statement#
The include statement imports all exported symbols from the target module into the current namespace:
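A minimal sketch, assuming a hypothetical module named helpers that defines greet():
include helpers;

with entry {
    greet();  # defined in helpers and brought into scope by include
}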
Include operations are functionally equivalent to Python's from target import * syntax, bringing all public symbols from the target module into the current scope. This mechanism is particularly useful for importing helper functions from Python files or accessing all symbols from Jac modules without explicit enumeration, while maintaining clean namespace organization.
Module Resolution#
Jac follows a systematic approach to module resolution:
- Relative paths are resolved relative to the current file's location
- Absolute module names are searched first in JAC_PATH, then in Python's sys.path
- Module caching ensures each file is processed only once per build cycle
Interface and Implementation Separation#
Jac's impl keyword enables separation of interface declarations from implementation details. Import statements bring only the interface unless the implementation file is found on the module path, supporting lightweight static analysis and efficient incremental builds.
Integration with Object-Spatial Programming#
Import statements work seamlessly with Jac's object-spatial constructs, enabling modular organization of walkers, nodes, and edges across multiple files while maintaining the topological relationships essential for graph-based computation.
import from graph_utils { PathFinder, DataNode };
import spatial_algorithms as algorithms;
walker MyWalker {
can traverse with entry {
finder = PathFinder();
path = finder.find_path(here, target);
visit path;
}
}
Organized imports enable the Jac compiler to analyze dependencies effectively and optimize distributed execution graphs for object-spatial operations.
Application Bundling#
Jac programs can be packaged together with their dependencies into a single deployable bundle using the toolchain. Bundling resolves imports at build time and embeds module contents so that applications can be distributed without external file dependencies.
Archetypes#
Code Example
Runnable Example in Jac and JacLib
def print_base_classes(cls: type) -> type {
print(
f"Base classes of {cls.__name__}: {[c.__name__ for c in cls.__bases__]}"
);
return cls;
}
class Animal {}
obj Domesticated {}
@print_base_classes
node Pet(Animal, Domesticated) {}
walker Person(Animal) {}
walker Feeder(Person) {}
@print_base_classes
walker Zoologist(Feeder) {}
async walker MyWalker {}
Jac Grammar Snippet
Description
Archetypes represent Jac's extension of traditional object-oriented programming classes, providing specialized constructs that enable object-spatial programming. Each archetype type serves a distinct role in building topological computational systems where data and computation are distributed across graph structures.
Archetype Types#
Jac defines five archetype categories that form the foundation of object-spatial programming:
Object (obj): Standard object archetypes that represent traditional OOP class semantics. Objects serve as the base type from which nodes, walkers, and edges inherit, ensuring compatibility with object-spatial programming patterns.
Node (node): Specialized archetypes that represent discrete locations within topological structures. Nodes can store data, host computational abilities, and connect to other nodes through edges, forming the spatial foundation for graph-based computation.
Walker (walker): Mobile computational entities that traverse node-edge structures, carrying algorithmic behaviors and state throughout the topological space. Walkers embody the "computation moving to data" paradigm central to object-spatial programming.
Edge (edge): First-class relationship archetypes that connect nodes while providing their own computational capabilities. Edges represent both connectivity and transition-specific behaviors within the graph structure.
Class (class): Python-compatible class archetypes that faithfully follow Python's class syntax and semantics. Unlike other archetypes, classes require explicit self parameters in methods and do not support the has keyword for property declarations. They provide full compatibility with Python's object-oriented programming model.
Implementation Details#
From an implementation standpoint, the four object-spatial archetypes (obj, node, walker, edge) behave similarly to Python dataclasses. Their constructor semantics and initialization rules mirror the automated constructors that Python generates for dataclasses, providing automatic initialization of has variables and proper handling of inheritance hierarchies.
Class vs Object-Spatial Archetypes#
The class archetype provides Python-compatible class definitions, while the semantics for other archetypes are inspired by dataclass-like behavior:
# Python-compatible class archetype
class PythonStyleClass {
def init(self: PythonStyleClass, value: int) {
self.value = value;
}
def increment(self, amount: int) {
self.value += amount;
return self.value;
}
}
# Jac's obj with automated constructor semantics
obj DataSpatialObject {
has value: int; # Automatically included in constructor
can increment(amount: int) {
self.value += amount;
return self.value;
}
}
Note that class archetypes require explicit self parameters and manual constructor definition, while object-spatial archetypes automatically generate constructors based on has declarations.
Constructor Rules and Has Variables#
Data spatial archetypes (obj, node, walker, edge) automatically generate constructors based on their has variable declarations, following rules similar to Python dataclasses:
obj Person {
has name: str;
has age: int = 0; # Default value
has id: str by postinit;
can postinit {
# Called after automatic initialization
self.id = f"{self.name}_{self.age}";
}
}
# Constructor automatically accepts name and age parameters
person = Person(name="Alice", age=30);
# After construction, postinit runs to set id = "Alice_30"
Constructor Generation Rules:
- All has variables without default values become required constructor parameters
- Variables with default values become optional parameters
- Parameters are accepted in declaration order
- The postinit method runs after all has variables are initialized
Post-initialization Hook:
The postinit method mirrors Python's __post_init__ semantics:
- Executes automatically after the generated constructor completes
- Has access to all initialized has variables
- Useful for derived attributes, validation, or complex initialization logic
- Cannot modify the constructor signature
node DataNode {
has raw_data: list;
has processed: bool = False;
has stats: dict by postinit;
can postinit {
# Compute derived data after construction
self.stats = {
"count": len(self.raw_data),
"types": set(type(x) for x in self.raw_data)
};
self.processed = True;
}
}
Inheritance and Composition#
Archetypes support multiple inheritance, enabling complex type hierarchies that reflect real-world relationships:
obj Animal;
obj Domesticated;
node Pet(Animal, Domesticated) {
has name: str;
has species: str;
}
walker Caretaker(Person) {
can feed with Pet entry {
print(f"Feeding {here.name} the {here.species}");
}
}
The inheritance syntax (ParentType1, ParentType2) allows archetypes to combine behaviors from multiple sources, supporting rich compositional patterns.
Decorators and Metaprogramming#
Decorators provide metaprogramming capabilities that enhance archetype behavior without modifying core definitions:
@print_base_classes
node EnhancedPet(Animal, Domesticated) {
has enhanced_features: list;
}
@performance_monitor
walker OptimizedProcessor {
can process with entry {
# Processing logic with automatic performance tracking
analyze_data(here.data);
}
}
Decorators enable cross-cutting concerns like logging, performance monitoring, and validation to be applied declaratively across archetype definitions.
Access Control#
Archetypes support access modifiers that control visibility and encapsulation:
node :pub DataNode {
has :priv internal_state: dict;
has :pub public_data: any;
can :protect process_internal with visitor entry {
# Protected processing method
self.internal_state.update(visitor.get_updates());
}
}
Access modifiers (:pub, :priv, :protect) enable proper encapsulation while supporting the collaborative nature of object-spatial computation.
Object-Spatial Integration#
Archetypes work together to create complete object-spatial systems:
node DataSource {
has data: list;
can provide_data with walker entry {
visitor.receive_data(self.data);
}
}
edge DataFlow(DataSource, DataProcessor) {
can transfer with walker entry {
# Edge-specific transfer logic
transformed_data = self.transform(visitor.data);
visitor.update_data(transformed_data);
}
}
walker DataCollector {
has collected: list = [];
can collect with DataSource entry {
here.provide_data();
visit [-->]; # Continue to connected nodes
}
}
This integration enables sophisticated graph-based algorithms where computation flows naturally through topological structures, with each archetype type contributing its specialized capabilities to the overall system behavior.
Archetypes provide the foundational abstractions that make object-spatial programming both expressive and maintainable, enabling developers to model complex systems as interconnected computational topologies.
Async Walker#
Async walkers extend the walker archetype with asynchronous capabilities:
import time;
import asyncio;
import from typing {Coroutine}
node A {
has val: int;
}
async walker W {
has num: int;
async can do1 with A entry {
print("A Entry action ", here.val);
visit [here-->];
}
}
with entry {
root ++> (a1 := A(1)) ++> [a2 := A(2), a3 := A(3), a4 := A(4)];
w1 = W(8);
async def foo(w:W, a:A)-> None {
print("Let's start the task");
x = w spawn a;
print("It is Coroutine task", isinstance(x, Coroutine));
await x;
print("Coroutine task is completed");
}
asyncio.run(foo(w1,a1));
}
Async walkers provide significant advantages for modern object-spatial applications by enabling concurrent execution where multiple async walkers can traverse different graph regions simultaneously, improving overall system throughput. They excel at handling non-blocking I/O operations, ensuring that network requests, file operations, and database queries don't block the traversal of other graph paths. This seamless asyncio integration provides full compatibility with Python's rich async ecosystem, allowing developers to leverage existing async libraries and frameworks within their object-spatial programs. The asynchronous nature also leads to superior resource efficiency through better utilization of system resources during I/O operations, as the system can continue processing other graph nodes while waiting for slow operations to complete.
Archetype bodies#
Code Example
Runnable Example in Jac and JacLib
obj Car {
has make: str,
model: str,
year: int;
static has wheels: int = 4;
def display_car_info {
print(f"Car Info: {self.year} {self.make} {self.model}");
}
static def get_wheels -> int {
return Car.wheels;
}
}
with entry {
car = Car("Toyota", "Camry", 2020);
car.display_car_info();
print("Number of wheels:", Car.get_wheels());
}
Jac Grammar Snippet
Description
Archetype bodies define the internal structure and behavior of Jac's specialized class constructs. These bodies contain member declarations, abilities, and implementation details that enable both traditional object-oriented programming and object-spatial computation patterns.
Member Declaration Syntax#
Archetype members are declared using the has keyword with mandatory type annotations:
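A minimal sketch (the Point object is illustrative):
obj Point {
    has x: float;
    has y: float = 0.0;
    static has dimensions: int = 2;
}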
The has keyword establishes instance variables with explicit type constraints, while static has creates class-level variables shared across all instances.
Instance and Static Members#
Instance Members: Declared with has, these variables belong to individual archetype instances and maintain separate state for each object.
Static Members: Declared with static has, these variables belong to the archetype class itself and are shared across all instances, providing class-level data storage.
Ability Definitions#
Abilities within archetype bodies define both traditional methods and object-spatial behaviors:
obj DataProcessor {
has data: list;
can process_data(self) -> dict {
# Traditional method implementation
return {"processed": len(self.data), "status": "complete"};
}
can validate with entry {
# Data spatial ability triggered by events
if (not self.data) {
raise ValueError("No data to process");
}
}
}
Access Control Modifiers#
Archetype bodies support access control for encapsulation:
obj SecureContainer {
has :pub public_data: str;
has :priv private_data: str;
has :protect protected_data: str;
can :pub get_public_info(self) -> str {
return self.public_data;
}
can :priv internal_process(self) {
# Private method for internal use
self.protected_data = "processed";
}
}
Access modifiers (:pub, :priv, :protect) control visibility and access patterns across module boundaries.
Object-Spatial Archetype Bodies#
Data spatial archetypes include specialized members and abilities:
node DataNode {
has data: dict;
has processed: bool = false;
has connections: int = 0;
can process_incoming with visitor entry {
# Triggered when walker enters this node
print(f"Processing visitor {visitor.id} at node {self.id}");
self.processed = true;
visitor.record_visit(self);
}
can cleanup with visitor exit {
# Triggered when walker leaves this node
self.connections += 1;
print(f"Visitor departed, total connections: {self.connections}");
}
}
walker DataCollector {
has collected: list = [];
has visit_count: int = 0;
can collect with DataNode entry {
# Triggered when entering DataNode instances
self.collected.append(here.data);
self.visit_count += 1;
}
can record_visit(self, node: DataNode) {
# Traditional method callable by nodes
print(f"Recorded visit to node {node.id}");
}
}
edge DataFlow(DataNode, DataNode) {
has flow_rate: float;
has capacity: int;
can regulate_flow with visitor entry {
# Triggered when walker traverses this edge
if (visitor.data_size > self.capacity) {
visitor.compress_data();
}
}
}
Constructor Patterns#
Archetype bodies can include initialization logic:
obj ConfigurableProcessor {
has config: dict;
has initialized: bool = false;
can init(self, config_data: dict) {
# Constructor-like initialization
self.config = config_data;
self.initialized = true;
self.validate_config();
}
can validate_config(self) {
# Private validation method
required_keys = ["input_format", "output_format"];
for key in required_keys {
if (key not in self.config) {
raise ValueError(f"Missing required config: {key}");
}
}
}
}
Method Overriding and Inheritance#
Archetype bodies support inheritance patterns:
obj BaseProcessor {
has name: str;
can process(self, data: any) -> any {
# Base implementation
return data;
}
can get_info(self) -> str {
return f"Processor: {self.name}";
}
}
obj AdvancedProcessor(BaseProcessor) {
has advanced_features: list;
can process(self, data: any) -> any {
# Override base implementation
enhanced_data = self.enhance_data(data);
return super().process(enhanced_data);
}
can enhance_data(self, data: any) -> any {
# Additional processing logic
return {"enhanced": data, "features": self.advanced_features};
}
}
Integration with Implementation Blocks#
Archetype bodies can be separated from their implementations:
obj Calculator {
has precision: int = 2;
# Method declarations
can add(self, a: float, b: float) -> float;
can multiply(self, a: float, b: float) -> float;
}
impl Calculator {
can add(self, a: float, b: float) -> float {
result = a + b;
return round(result, self.precision);
}
can multiply(self, a: float, b: float) -> float {
result = a * b;
return round(result, self.precision);
}
}
Documentation and Metadata#
Archetype bodies can include documentation strings:
obj DocumentedClass {
"""
A well-documented archetype that demonstrates
proper documentation practices in Jac.
"""
has value: int;
can get_value(self) -> int {
"""Returns the current value."""
return self.value;
}
can set_value(self, new_value: int) {
"""Sets a new value with validation."""
if (new_value < 0) {
raise ValueError("Value must be non-negative");
}
self.value = new_value;
}
}
Archetype bodies provide the structural foundation for Jac's object-oriented and object-spatial programming capabilities, enabling developers to create sophisticated, well-encapsulated components that support both traditional programming patterns and innovative topological computation models.
Enumerations#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac provides native enumeration support through the enum construct, offering ordered sets of named constants with integrated access control and implementation capabilities. Enumerations behave similarly to Python's enum.Enum while supporting Jac's archetype system and object-spatial programming features.
Basic Enumeration Declaration#
Enumeration values automatically increment from the previous value when omitted. Trailing commas are permitted, and enum names follow standard identifier rules consistent with other Jac archetypes.
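A minimal sketch (the Color enum is illustrative; values without an explicit assignment continue from the previous one):
enum Color {
    RED = 1,
    GREEN,  # automatically becomes 2
    BLUE,   # automatically becomes 3
}

with entry {
    print(Color.BLUE.value);  # 3
}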
Access Control#
Enumerations support access modifiers to control visibility across module boundaries:
enum :protect Role {
ADMIN = "admin",
USER = "user",
}
enum :pub Status {
ACTIVE,
INACTIVE,
PENDING
}
Access modifiers (:priv, :protect, :pub) determine whether enumerations can be accessed from external modules, enabling proper encapsulation of enumerated constants.
Member Properties#
Enumeration members expose standard properties for introspection:
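A minimal sketch showing the name and value properties:
enum Status {
    ACTIVE = "active",
    INACTIVE = "inactive"
}

with entry {
    s = Status.ACTIVE;
    print(s.name);   # prints "ACTIVE"
    print(s.value);  # prints "active"
}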
These properties provide runtime access to both the symbolic name and underlying value of enumeration members, supporting dynamic enumeration processing.
Implementation Blocks#
Enumerations can include additional behavior through implementation blocks, separating declaration from logic:
enum Day;
impl Day {
MON = 1,
TUE = 2,
WED = 3,
THU = 4,
FRI = 5,
SAT = 6,
SUN = 7,
def is_weekend(self) -> bool {
return self in [Day.SAT, Day.SUN];
}
def next_day(self) -> Day {
return Day((self.value % 7) + 1);
}
}
Implementation blocks enable enumerations to contain methods and computed properties while maintaining clean separation between constant definitions and behavioral logic.
Integration with Decorators#
Enumerations support Python decorators for additional functionality:
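A minimal sketch, assuming unique is imported from Python's standard enum module:
import from enum { unique };

@unique
enum Priority {
    LOW = 1,
    MEDIUM = 2,
    HIGH = 3
}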
The @unique decorator ensures all enumeration values are distinct, preventing accidental duplicate assignments.
Usage in Object-Spatial Contexts#
Enumerations integrate seamlessly with object-spatial programming constructs:
enum NodeType {
DATA,
PROCESSING,
STORAGE
}
node TypedNode {
has node_type: NodeType;
can process with visitor entry {
if (self.node_type == NodeType.PROCESSING) {
# Perform processing logic
result = process_data(visitor.data);
visitor.set_result(result);
}
}
}
Enumerations provide type-safe constants that enhance code clarity and maintainability in both traditional programming contexts and object-spatial graph operations.
Functions and Abilities#
Code Example
Runnable Example in Jac and JacLib
obj Divider {
def divide(x: float, y: float) -> float {
return (x / y);
}
}
# This is an abstract class as it has an abstract method.
obj Calculator {
static def:priv multiply(a: float, b: float) -> float {
return a * b;
}
def substract -> float abs;
def add(number: float, *a: tuple) -> float;
}
obj Substractor(Calculator) {
def substract(x: float, y: float) -> float {
return (x - y);
}
}
impl Calculator.add(number: float, *a: tuple) -> float {
return (number * sum(a));
}
with entry {
div = Divider();
sub = Substractor();
print(div.divide(55, 11));
print(Calculator.multiply(9, -2));
print(sub.add(5, 20, 34, 56));
print(sub.substract(9, -2));
}
Jac Grammar Snippet
Description
Jac provides two complementary approaches to defining executable code: traditional functions using def and object-spatial abilities using can. This dual system supports both conventional programming patterns and the unique requirements of computation moving through topological structures.
Omission of Gratuitous self#
Unlike Python, Jac methods of obj, node, edge, and walker do not require a self parameter unless it is actually used. Instance methods implicitly receive the current object, reducing boilerplate and keeping signatures focused on relevant parameters.
Function Definitions#
Traditional functions use the def keyword with mandatory type annotations:
def calculate_distance(x1: float, y1: float, x2: float, y2: float) -> float {
return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5;
}
Functions provide explicit parameter passing and return value semantics, making them suitable for stateless computations and utility operations.
Abilities#
Abilities represent Jac's distinctive approach to defining behaviors that respond to object-spatial events:
walker PathFinder {
can explore with node entry {
# Ability triggered when walker enters any node
print(f"Exploring node: {here.name}");
visit [-->]; # Continue to connected nodes
}
can process with DataNode exit {
# Ability triggered when leaving DataNode instances
print(f"Finished processing {here.data}");
}
}
Abilities execute implicitly based on spatial events rather than explicit invocation, embodying the object-spatial programming paradigm.
Access Control#
Both functions and abilities support access modifiers for encapsulation:
obj Calculator {
def :pub add(a: float, b: float) -> float {
return a + b;
}
def :priv internal_compute(data: list) -> float {
return sum(data) / len(data);
}
can :protect validate with entry {
# Protected ability for internal validation
if (not self.is_valid()) {
raise ValueError("Invalid calculator state");
}
}
}
Static Methods#
Static methods operate at the class level without requiring instance context:
obj MathUtils {
static def multiply(a: float, b: float) -> float {
return a * b;
}
static def factorial(n: int) -> int {
return 1 if n <= 1 else n * MathUtils.factorial(n - 1);
}
}
Abstract Declarations#
Abstract methods define interfaces that must be implemented by subclasses:
obj Shape {
def area() -> float abs;
def perimeter() -> float abs;
}
obj Rectangle(Shape) {
has width: float;
has height: float;
def area() -> float {
return self.width * self.height;
}
def perimeter() -> float {
return 2 * (self.width + self.height);
}
}
Implementation Separation#
Jac enables separation of declarations from implementations using impl blocks:
obj DataProcessor {
def process_data(data: list) -> dict;
}
impl DataProcessor {
def process_data(data: list) -> dict {
return {
"count": len(data),
"sum": sum(data),
"average": sum(data) / len(data)
};
}
}
Object-Spatial Integration#
Abilities integrate seamlessly with object-spatial constructs, enabling sophisticated graph algorithms:
node DataNode {
has data: dict;
has processed: bool = false;
can validate with visitor entry {
# Node ability triggered by walker visits
if (not self.data) {
visitor.report_error(f"Empty data at {self.id}");
}
}
can mark_complete with visitor exit {
# Mark processing complete when walker leaves
self.processed = true;
}
}
walker DataValidator {
has errors: list = [];
can report_error(message: str) {
self.errors.append(message);
}
can validate_graph with entry {
# Start validation process
visit [-->*]; # Visit all reachable nodes
}
}
Parameter Patterns#
Functions and abilities support flexible parameter patterns:
def flexible_function(required: int, optional: str = "default", *args: tuple, **kwargs: dict) -> any {
return {
"required": required,
"optional": optional,
"args": args,
"kwargs": kwargs
};
}
Asynchronous Operations#
Both functions and abilities support asynchronous execution:
async def fetch_data(url: str) -> dict {
# Asynchronous data fetching
response = await http_client.get(url);
return response.json();
}
walker AsyncProcessor {
async can process with entry {
# Asynchronous ability execution
data = await fetch_data(here.data_url);
here.update_data(data);
}
}
Functions and abilities together provide a comprehensive system for organizing computational logic that supports both traditional programming patterns and the innovative object-spatial paradigm where computation flows through topological structures.
Implementations#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Implementations in Jac provide a powerful mechanism for separating interface declarations from their concrete implementations. This feature supports modular programming, interface segregation, and flexible code organization patterns common in modern software development.
Implementation Concept#
Jac-lang offers a unique feature that allows developers to separate the declaration of code elements from their implementations. This facilitates cleaner code organization without requiring manual imports.
The impl keyword (or the :type:name syntax) allows you to define the concrete implementation of previously declared interfaces, including:
- Function implementations: Providing bodies for declared function signatures
- Object implementations: Adding members and behavior to declared objects
- Enumeration implementations: Defining the values and structure of enums
- Test implementations: Defining test cases separately from main code
Comparison with Traditional Approaches#
Usually when coding with Python, the body of a function or method is coded right after the function/method declaration as shown in the following Python code snippet:
from enum import Enum
def foo() -> str:
return "Hello"
class vehicle:
def __init__(self) -> None:
self.name = "Car"
class Size(Enum):
Small = 1
Medium = 2
Large = 3
car = vehicle()
print(foo())
print(car.name)
print(Size.Medium.value)
However, Jac-lang offers novel language features which allow programmers to organize their code effortlessly by separating declarations from implementations.
Function Implementations#
Functions can be declared with just their signature and implemented separately using two different syntaxes:
Modern impl Syntax#
Declaration:
Implementation:
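Both pieces together, reconstructed as a minimal sketch from the complete example later in this section:
# Declaration
can foo() -> str;

# Implementation
impl foo() -> str {
    return "Hello";
}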
Legacy Colon Syntax#
Declaration:
Implementation:
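The same function in the legacy colon form, as a minimal sketch mirroring the main.impl.jac example shown later:
# Declaration
can foo() -> str;

# Implementation
:can:foo() -> str {
    return "Hello";
}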
This separation enables:
- Interface definition: Clearly specify what functions are available
- Deferred implementation: Implement functionality when convenient
- Multiple implementations: Different implementations for different contexts
Object Implementations#
Objects can be declared as empty shells and have their structure defined later:
Modern impl Syntax#
Declaration:
Implementation:
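A minimal sketch, mirroring the vehicle object from the complete example later in this section:
# Declaration
obj vehicle;

# Implementation
impl vehicle {
    has name: str = "Car";
}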
Legacy Colon Syntax#
Declaration:
Implementation:
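The same object in the legacy colon form, mirroring the main.impl.jac example:
# Declaration
obj vehicle;

# Implementation
:obj:vehicle {
    has name: str = "Car";
}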
This allows for:
- Progressive definition: Build object structure incrementally
- Modular design: Separate interface from implementation concerns
- Flexible organization: Organize code based on logical groupings
Enumeration Implementations#
Enumerations can be declared and have their values specified in implementations:
Modern impl Syntax#
Declaration:
Implementation:
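A minimal sketch, mirroring the Size enumeration from the complete example later in this section:
# Declaration
enum Size;

# Implementation
impl Size {
    Small = 1,
    Medium = 2,
    Large = 3
}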
Legacy Colon Syntax#
Declaration:
Implementation:
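The same enumeration in the legacy colon form, mirroring the main.impl.jac example:
# Declaration
enum Size;

# Implementation
:enum:Size {
    Small = 1,
    Medium = 2,
    Large = 3
}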
Test Implementations#
Tests can also be declared and implemented separately:
Declaration:
Implementation:
Complete Example#
Here's a complete example showing declarations and their usage:
can foo() -> str;
obj vehicle;
enum Size;
test check_vehicle;
with entry {
car = vehicle();
print(foo());
print(car.name);
print(Size.Medium.value);
}
File Organization Strategies#
There are multiple locations where implementations can be organized for optimal code management:
Same .jac File as Declaration#
The implementations can be held in the same file as the declaration. This improves code organization visually during declaration while keeping everything in one place:
can foo() -> str;
obj vehicle;
impl foo() -> str {
return "Hello";
}
impl vehicle {
has name: str = "Car";
}
Separate Implementation Files#
Using .impl.jac and .test.jac Files#
For better codebase management, implementations can be separated into dedicated files living in the same directory as the main module, named as <main_module_name>.impl.jac and <main_module_name>.test.jac. Including or importing these files is not required - they are automatically discovered.
File structure:
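A sketch of the expected directory layout:
base
├── main.jac
├── main.impl.jac
└── main.test.jac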
main.jac:
can foo() -> str;
obj vehicle;
enum Size;
test check_vehicle;
with entry {
car = vehicle();
print(foo());
print(car.name);
print(Size.Medium.value);
}
main.impl.jac:
:can:foo() -> str {
return "Hello";
}
:obj:vehicle {
has name: str = "Car";
}
:enum:Size {
Small = 1,
Medium = 2,
Large = 3
}
main.test.jac:
Using .impl and .test Folders#
For even better organization, implementations can be organized within individual .impl and .test folders named as <main_module_name>.impl and <main_module_name>.test.
Inside these folders, implementations can be broken down into multiple files as per the programmer's preference, as long as each file has the .impl.jac or .test.jac suffix.
File structure:
base
├── main.jac
│
├── main.impl
│ ├── foo.impl.jac
│ ├── vehicle.impl.jac
│ └── size.impl.jac
│
└── main.test
└── check_vehicle.test.jac
main.impl/foo.impl.jac:
main.impl/vehicle.impl.jac:
main.impl/size.impl.jac:
main.test/check_vehicle.test.jac:
These file separation features in Jac-lang allow programmers to organize their code seamlessly without any extra include or import statements.
Benefits of Implementation Separation#
- Interface Clarity: Clean separation between what is available (interface) and how it works (implementation)
- Code Organization: Group related implementations together regardless of where interfaces are declared
- Modularity: Implement different parts of a system in separate modules or files
- Testing: Mock implementations can be provided for testing purposes, and tests can be organized separately
- Flexibility: Switch between different implementations based on requirements
- Team Collaboration: Different team members can work on interfaces and implementations independently
- Progressive Development: Define interfaces early and implement them as development progresses
Implementation Requirements#
- Signature Matching: Implementation must exactly match the declared signature
- Type Compatibility: Return types and parameter types must be consistent
- Completeness: All declared interfaces must eventually have implementations
- File Organization: Implementation files are automatically discovered when following naming conventions
Note: Even if the specific suffixes described above are not used for separated files and folders, the separated code bodies can still live in separate files and folders as long as they are explicitly included in the main module.
Implementations provide a robust foundation for building scalable, maintainable Jac applications with clear architectural boundaries and flexible code organization strategies.
Semstrings#
Code Example
Runnable Example in Jac and JacLib
import from mtllm.llms { FakeLLM }
glob llm = FakeLLM(default="[Output] R8@jL3pQ");
def generate_password() -> str by llm();
sem generate_password = """\
Generates and returns password that:
- contain at least 8 characters
- contain at least one uppercase letter
- contain at least one lowercase letter
- contain at least one digit
- contain at least one special character
""";
with entry {
password = generate_password();
print('Generated password:', password);
}
Jac Grammar Snippet
Description
Semantic Strings in Jac provide a powerful mechanism for enriching code with natural language descriptions that can be leveraged by Large Language Models (LLMs) for intelligent code generation and execution. This feature enables developers to create AI-powered functions and provide semantic context for code elements, facilitating more intuitive and intelligent programming patterns.
Semantic String Concept#
Jac-lang offers a unique feature called semantic strings (semstrings) that allows developers to associate natural language descriptions with code elements. These descriptions serve as instructions or context for LLMs, enabling AI-powered code execution and intelligent behavior generation.
The sem keyword allows you to define semantic descriptions for:
- Function behavior: Detailed instructions for what a function should do
- Object properties: Descriptions of class attributes and their purposes
- Method parameters: Context for function arguments and their expected values
- Enumeration values: Semantic meaning of enum constants
- Nested structures: Hierarchical descriptions for complex objects
Comparison with Traditional Approaches#
Traditional programming relies on explicit implementations and comments for documentation:
def generate_password():
"""
Generates a secure password with specific requirements.
This is just documentation - the implementation must be written manually.
"""
import random
import string
# Manual implementation required
characters = string.ascii_letters + string.digits + "!@#$%^&*"
password = ''.join(random.choice(characters) for _ in range(12))
return password
Jac's semantic strings enable AI-powered function execution without manual implementation:
import from mtllm.llms {OpenAI}
glob llm = OpenAI(model_name="gpt-4o");
def generate_password() -> str by llm();
sem generate_password = """\
Generates and returns password that:
- contain at least 8 characters
- contain at least one uppercase letter
- contain at least one lowercase letter
- contain at least one digit
- contain at least one special character
""";
Function Semantic Strings#
Functions can be enhanced with semantic strings that provide detailed instructions for LLM execution:
Basic Function with Semantic String:
def generate_specific_number() -> int by llm();
sem generate_specific_number = "Generates a specific number that is 120597 and returns it.";
Complex Function with Detailed Instructions:
def generate_password() -> str by llm();
sem generate_password = """\
Generates and returns password that:
- contain at least 8 characters
- contain at least one uppercase letter
- contain at least one lowercase letter
- contain at least one digit
- contain at least one special character
""";
The by llm() syntax indicates that the function should be executed by the configured LLM using the semantic string as instructions.
Object and Property Semantic Strings#
Objects and their properties can be described semantically for better AI understanding:
Object Description:
obj Person {
has name: str;
has yob: int;
def calc_age(year: int) -> int {
return year - self.yob;
}
}
sem Person = "A class representing a person.";
sem Person.name = "The name of the person.";
sem Person.yob = "The year of birth of the person.";
Method and Parameter Descriptions:
sem Person.calc_age = "Calculate the age of the person.";
sem Person.calc_age.year = "The year to calculate the age against.";
Nested Object Semantic Strings#
Semantic strings support hierarchical descriptions for complex nested structures:
obj OuterClass {
obj InnerClass {
has inner_value: str;
}
}
sem OuterClass = "A class containing an inner class.";
sem OuterClass.InnerClass = "An inner class within OuterClass.";
sem OuterClass.InnerClass.inner_value = "A value specific to the inner class.";
Enumeration Semantic Strings#
Enumerations can have semantic descriptions for both the enum itself and individual values:
enum Size {
Small = 1,
Medium = 2,
Large = 3
}
sem Size = "An enumeration representing different sizes.";
sem Size.Small = "The smallest size option.";
sem Size.Medium = "The medium size option.";
sem Size.Large = "The largest size option.";
LLM Integration#
Semantic strings work in conjunction with LLM configurations to enable AI-powered execution:
LLM Configuration:
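For example, mirroring the OpenAI setup shown earlier in this section (the model name is illustrative):
import from mtllm.llms {OpenAI}
glob llm = OpenAI(model_name="gpt-4o");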
Function with LLM Execution:
def generate_password() -> str by llm();
sem generate_password = """\
Generates and returns password that:
- contain at least 8 characters
- contain at least one uppercase letter
- contain at least one lowercase letter
- contain at least one digit
- contain at least one special character
""";
LLM Method Parameters:
def analyze_sentiment(text: str) -> str by llm(method="Chain-of-Thoughts");
sem analyze_sentiment = "Analyze the sentiment of the given text and return positive, negative, or neutral.";
Complete Example#
Here's a comprehensive example demonstrating various semantic string applications:
import from mtllm.llms {OpenAI}
glob llm = OpenAI(model_name="gpt-4o");
# AI-powered functions
def generate_password() -> str by llm();
def generate_email() -> str by llm();
def analyze_text(content: str) -> dict by llm();
# Object with semantic descriptions
obj User {
has username: str;
has email: str;
has created_at: str;
def validate_credentials(password: str) -> bool by llm();
}
# Semantic string definitions
sem generate_password = """\
Generates and returns a secure password that:
- contains at least 8 characters
- contains at least one uppercase letter
- contains at least one lowercase letter
- contains at least one digit
- contains at least one special character
""";
sem generate_email = "Generates a realistic email address for testing purposes.";
sem analyze_text = "Analyzes the given text content and returns a dictionary with sentiment, key topics, and summary.";
sem analyze_text.content = "The text content to be analyzed for sentiment and topics.";
sem User = "A class representing a user account in the system.";
sem User.username = "The unique username for the user account.";
sem User.email = "The email address associated with the user account.";
sem User.created_at = "The timestamp when the user account was created.";
sem User.validate_credentials = "Validates if the provided password meets security requirements.";
sem User.validate_credentials.password = "The password to be validated against security criteria.";
with entry {
# Use AI-powered functions
password = generate_password();
email = generate_email();
print("Generated password:", password);
print("Generated email:", email);
# Create user with AI validation
user = User(username="testuser", email=email, created_at="2025-06-17");
is_valid = user.validate_credentials(password);
print("Password is valid:", is_valid);
}
File Organization for Semantic Strings#
Like implementations, semantic strings can be organized in multiple ways:
Same File Organization#
Semantic strings can be defined in the same file as the code:
def generate_password() -> str by llm();
sem generate_password = "Generates a secure password with specific requirements.";
Separate Semantic Files#
For better organization, semantic strings can be separated into dedicated files:
File structure:
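A sketch of the expected layout, following the same naming convention as .impl.jac files:
base
├── main.jac
└── main.sem.jac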
main.jac:
import from mtllm.llms {OpenAI}
glob llm = OpenAI(model_name="gpt-4o");
def generate_password() -> str by llm();
obj User {
has name: str;
has email: str;
}
with entry {
password = generate_password();
print("Password:", password);
}
main.sem.jac:
sem generate_password = """\
Generates and returns password that:
- contain at least 8 characters
- contain at least one uppercase letter
- contain at least one lowercase letter
- contain at least one digit
- contain at least one special character
""";
sem User = "A class representing a user account.";
sem User.name = "The full name of the user.";
sem User.email = "The email address of the user.";
Benefits of Semantic Strings#
- AI-Powered Development: Enable LLMs to generate function implementations based on natural language descriptions
- Self-Documenting Code: Semantic strings serve as both documentation and functional specifications
- Intelligent Behavior: LLMs can understand context and generate appropriate responses based on semantic descriptions
- Rapid Prototyping: Quickly create functional prototypes without writing detailed implementations
- Maintainable AI Integration: Clear separation between AI instructions and traditional code logic
- Flexible Descriptions: Support for simple one-liners to complex multi-line instructions
- Hierarchical Context: Nested semantic descriptions for complex object structures
- Method-Agnostic: Works with various LLM providers and reasoning methods
Global variables#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Global variables provide module-level data storage that persists throughout program execution and can be accessed across different scopes within a Jac program. Jac offers two declaration keywords with distinct semantic meanings and access control capabilities.
Declaration Keywords#
let Keyword: Declares module-level variables with lexical scoping semantics, suitable for configuration values and module-local state that may be reassigned during execution.
glob Keyword: Explicitly declares global variables with program-wide scope, emphasizing their global nature and intended use for shared state across multiple modules or components.
Access Control Modifiers#
Jac provides three access control levels for global variables:
- :priv : Private to the current module, preventing external access
- :pub : Publicly accessible from other modules and external code
- :protect : Protected access with limited external visibility
When no access modifier is specified, variables default to module-level visibility with standard scoping rules.
Syntax and Usage#
let:priv config_value = "development";
glob:pub shared_counter = 0;
glob:protect system_state = "initialized";
glob default_timeout = 30;
Integration with Entry Points#
Global variables integrate seamlessly with entry blocks and named execution contexts:
let:priv module_data = initialize_data();
glob:pub api_version = "2.1";
with entry:main {
print(f"Module data: {module_data}");
print(f"API Version: {api_version}");
# Global variables remain accessible throughout execution
process_with_globals();
}
Common Usage Patterns#
Configuration Management: Global variables provide centralized configuration storage accessible across the entire program without parameter passing.
Shared State: Multiple components can access and modify shared program state through globally accessible variables.
Module Interfaces: Public global variables create clean interfaces between modules, exposing necessary data while maintaining encapsulation through access controls.
System Constants: Global variables store system-wide constants and settings that remain consistent throughout program execution.
Global variables complement Jac's object-spatial programming model by providing persistent state that walkers and other computational entities can access during graph traversal and distributed computation operations.
Free code#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Free code in Jac refers to executable statements that exist at the module level but are not part of a function, class, or other structural element. Unlike many programming languages that allow loose statements to float freely in a module, Jac requires such code to be explicitly wrapped in with entry blocks for better code organization and clarity.
Entry Blocks
The with entry construct serves as a container for free-floating code that should execute when the module is run. This design choice promotes:
- Code cleanliness: Makes module structure more explicit and organized
- Readability: Clearly identifies executable code vs. definitions
- Maintainability: Reduces ambiguity about what runs when
Basic Syntax
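A minimal sketch:
with entry {
    print("Hello from module-level code");
}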
Named Entry Points
Entry blocks can optionally be given names for specific execution contexts:
This type of block can be used to define the program's initialization and execution starting point, similar to Python's if __name__ == "__main__": idiom. This design decision creates a clear separation between declarations and executable code at the module level, leading to more maintainable and better-organized programs. Note that declaring multiple instances of with entry in one script is supported, and they will be executed one after the other, top to bottom.
Here's an example usage of a named block:
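A minimal sketch using the __main__ name seen in earlier examples:
with entry:__main__ {
    print("Runs when the module is executed directly");
}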
A typical Jac module structure includes:
- Import statements: Bringing in external dependencies
- Type definitions: Classes, objects, and other archetype definitions
- Function definitions: Standalone functions and abilities
- Entry blocks: Executable code that runs when the module is executed
Use Cases
Entry blocks are commonly used for:
- Main program logic: The primary execution flow of a script
- Initialization code: Setting up module state or configuration
- Testing and examples: Demonstrating how defined functions and classes work
- Script execution: Code that should run when the module is executed directly
Interaction with Definitions
Code within entry blocks can access and use any functions, classes, and variables defined elsewhere in the module. The provided example demonstrates this by:
- Defining a circle object with init and area methods
- Defining a standalone foo function
- Using both within the entry block to perform calculations and print results
The entry block executes the main program logic: printing "Hello World!", calling the foo function with argument 7, and creating a circle instance to calculate and display its area.
This approach ensures that Jac modules maintain a clear separation between definitions and executable code, leading to more maintainable and understandable programs.
Inline python#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Inline Python in Jac provides a powerful mechanism to seamlessly integrate native Python code within Jac programs. This feature enables developers to leverage the vast Python ecosystem and existing Python libraries directly within their Jac applications.
Inline Python Syntax
Python code can be embedded in Jac using the ::py:: directive:
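A minimal sketch based on the example described below; here the Python block is assumed to be opened and closed with the ::py:: marker:
with entry {
    print("hello ");
}

::py::
def foo():
    print("world")

foo()
::py::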
The ::py:: markers act as delimiters that tell the Jac compiler to treat the enclosed content as native Python code rather than Jac syntax.
Integration with Jac Code
Inline Python code can coexist with Jac code in the same module. Variables, functions, and classes defined in Python blocks are accessible to subsequent Jac code, and vice versa, creating a seamless integration between the two languages.
Use Cases
Inline Python is particularly useful for:
- Library Integration: Using existing Python libraries that don't have Jac equivalents
- Performance Critical Code: Writing performance-sensitive algorithms in Python
- Legacy Code Reuse: Incorporating existing Python code into new Jac projects
- Gradual Migration: Transitioning from Python to Jac incrementally
- Specialized Operations: Accessing Python-specific features or libraries
Execution Context
The Python code executes in the same runtime environment as the Jac code, sharing the same namespace and variable scope. This allows for natural interaction between Jac and Python components.
Example Usage
The provided code example demonstrates a simple integration where:
- Jac code prints "hello " using the standard Jac print function
- An inline Python block defines a function foo() that prints "world"
- The Python function is called immediately within the same Python block
This creates a seamless output of "hello world" by combining Jac and Python execution.
Best Practices
When using inline Python:
- Keep Python blocks focused and cohesive
- Document the purpose of Python integration
- Consider whether the functionality could be achieved in pure Jac
- Be mindful of the mixing of language paradigms for code maintainability
Inline Python support makes Jac highly interoperable with the Python ecosystem while maintaining the benefits of Jac's unique language features.
Tests#
Code Example
Runnable Example in Jac and JacLib
test test1 {
check almostEqual(4.99999, 4.99999);
}
test test2 {
check 5 == 5;
}
test test3 {
check "e" in "qwerty";
}
with entry:__main__ {
import subprocess;
result = subprocess.run(
["jac", "test", f"{__file__}"],
stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
);
print(result.stderr);
}
Jac Grammar Snippet
Description
Tests in Jac provide built-in support for unit testing and validation of code functionality. The test keyword creates test blocks that can be executed to verify program correctness.
Syntax#
Basic Testing#
Simple test assertions:
test "basic arithmetic" {
assert 2 + 2 == 4;
assert 10 - 5 == 5;
assert 3 * 4 == 12;
assert 15 / 3 == 5;
}
test {
# Anonymous test
x = 10;
y = 20;
assert x < y;
assert x + y == 30;
}
Testing Functions#
Validate function behavior:
can calculate_area(radius: float) -> float {
return 3.14159 * radius * radius;
}
test "area calculation" {
assert calculate_area(1.0) == 3.14159;
assert calculate_area(2.0) == 12.56636;
assert abs(calculate_area(3.0) - 28.27431) < 0.00001;
}
Testing Objects and Classes#
Test object creation and methods:
obj Rectangle {
has width: float;
has height: float;
can area -> float {
return self.width * self.height;
}
can perimeter -> float {
return 2 * (self.width + self.height);
}
}
test "rectangle operations" {
rect = Rectangle(width=5.0, height=3.0);
assert rect.area() == 15.0;
assert rect.perimeter() == 16.0;
# Test property modification
rect.width = 10.0;
assert rect.area() == 30.0;
}
Testing Graph Operations#
Test node and edge functionality:
node DataNode {
has value: int;
}
edge Connection {
has weight: float = 1.0;
}
test "graph construction" {
# Create nodes
n1 = DataNode(value=10);
n2 = DataNode(value=20);
n3 = DataNode(value=30);
# Connect nodes
n1 ++>:Connection:++> n2;
n2 ++>:Connection(weight=2.0):++> n3;
# Test connections
assert len([n1 -->]) == 1;
assert len([n2 <--]) == 1;
assert len([n2 -->]) == 1;
# Test edge properties
edge = [n2 -->:Connection:][0];
assert edge.weight == 2.0;
}
Testing Walkers#
Verify walker behavior:
walker TestWalker {
has visited: list = [];
can traverse with entry {
self.visited.append(here.value);
visit [-->];
}
}
test "walker traversal" {
# Setup graph
root = DataNode(value=1);
child1 = DataNode(value=2);
child2 = DataNode(value=3);
root ++> child1;
root ++> child2;
# Test walker
walker = TestWalker();
result = walker spawn root;
assert 1 in walker.visited;
assert 2 in walker.visited;
assert 3 in walker.visited;
assert len(walker.visited) == 3;
}
Exception Testing#
Test error handling:
can divide(a: float, b: float) -> float {
if b == 0 {
raise ZeroDivisionError("Cannot divide by zero");
}
return a / b;
}
test "exception handling" {
# Test normal operation
assert divide(10, 2) == 5;
# Test exception
error_raised = False;
try {
divide(10, 0);
} except ZeroDivisionError {
error_raised = True;
}
assert error_raised;
}
Parameterized Testing#
Test with multiple inputs:
test "parameterized validation" {
test_cases = [
{"input": 0, "expected": "zero"},
{"input": 1, "expected": "positive"},
{"input": -1, "expected": "negative"}
];
for case in test_cases {
result = classify_number(case["input"]);
assert result == case["expected"];
}
}
Setup and Teardown#
Organize test environment:
test "with setup and cleanup" {
# Setup
temp_file = create_temp_file();
original_state = save_current_state();
try {
# Test operations
write_data(temp_file, "test data");
assert file_exists(temp_file);
assert read_data(temp_file) == "test data";
} finally {
# Cleanup
delete_file(temp_file);
restore_state(original_state);
}
}
Testing Async Operations#
Test asynchronous code:
test "async operations" {
async can fetch_data -> str {
await simulate_delay(0.1);
return "async result";
}
# Test async function
result = await fetch_data();
assert result == "async result";
}
Test Organization#
Group related tests:
# Math operations tests
test "addition operations" {
assert add(2, 3) == 5;
assert add(-1, 1) == 0;
assert add(0, 0) == 0;
}
test "multiplication operations" {
assert multiply(2, 3) == 6;
assert multiply(-2, 3) == -6;
assert multiply(0, 5) == 0;
}
# String operations tests
test "string manipulation" {
assert uppercase("hello") == "HELLO";
assert lowercase("WORLD") == "world";
assert capitalize("jac") == "Jac";
}
Best Practices#
- Descriptive Names: Use clear test names that explain what's being tested
- Single Responsibility: Each test should verify one specific behavior
- Independent Tests: Tests shouldn't depend on each other
- Clear Assertions: Make test expectations obvious
- Test Edge Cases: Include boundary conditions and error cases
Running Tests#
Tests can be executed:
- Individually by name
- All tests in a module
- Tests matching a pattern
- With verbose output for debugging
Integration Testing#
Test complete workflows:
test "end-to-end graph processing" {
# Build complex graph
graph = build_test_graph();
# Run processing walker
processor = DataProcessor();
results = processor spawn graph.root;
# Verify results
assert len(results) == expected_count;
assert all_nodes_processed(graph);
assert results.summary.errors == 0;
}
Tests in Jac provide a comprehensive framework for validating code correctness, from simple unit tests to complex integration scenarios, ensuring robust and reliable applications.
Codeblocks and Statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Code blocks and statements form the structural foundation of Jac programs, organizing executable code into logical units and providing the syntactic framework for all program operations.
Code Block Structure#
Code blocks use curly brace delimiters to group related statements into executable units:
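For instance, a minimal sketch (the variable x is illustrative):
with entry {
    x = 10;
    if x > 5 {
        print("x is greater than 5");
    }
}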
Code blocks establish scope boundaries for variables and provide organizational structure for complex operations. They can be nested arbitrarily deep, enabling hierarchical program organization.
Statement Categories#
Jac supports several categories of statements that serve different purposes:
Declaration Statements: Define functions, variables, and archetypes within the current scope, establishing named entities that can be referenced by subsequent code.
Expression Statements: Execute expressions for their side effects, including function calls, assignments, and object-spatial operations.
Control Flow Statements: Direct program execution through conditionals, loops, and exception handling constructs.
Object-Spatial Statements: Control walker movement and graph traversal operations, including visit, ignore, and disengage statements.
Statement Termination#
Most statements require semicolon termination to establish clear boundaries between executable units:
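For example (values are illustrative):
count = 0;           # simple statements end with a semicolon
count += 1;
print(count);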
Control structures and block statements typically do not require semicolons as their block structure provides natural termination.
Scope and Visibility#
Code blocks create lexical scopes where variables and functions defined within the block are accessible to nested blocks but not to parent scopes:
with entry {
let local_var = "accessible within this block";
def helper_function() {
# Can access local_var from enclosing scope
return local_var.upper();
}
print(helper_function());
}
# local_var and helper_function not accessible here
Integration with Object-Spatial Constructs#
Code blocks work seamlessly with object-spatial programming constructs, providing structured contexts for walker abilities and node operations:
walker Processor {
can process with entry {
# Code block within ability
let result = analyze_data(here.data);
if (result.is_valid) {
visit here.neighbors;
} else {
report "Invalid data at node";
disengage;
}
}
}
Code blocks provide the essential organizational structure that enables clear, maintainable Jac programs while supporting both traditional programming patterns and object-spatial computation models.
If statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
If statements provide conditional execution control, enabling programs to make decisions based on boolean expressions. Jac's if statement syntax supports the familiar if-elif-else pattern with mandatory code blocks, ensuring clear and safe conditional logic.
Basic Conditional Syntax#
If statements follow a structured pattern with required code blocks:
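The basic form looks like this (condition and the body are placeholders):
if condition {
    # statements executed when condition is true
}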
The condition must evaluate to a boolean value, and the code block is enclosed in mandatory curly braces for clarity and consistency.
Complete Conditional Structure#
The full conditional structure supports multiple decision branches:
if primary_condition {
# executed when primary condition is true
} elif secondary_condition {
# executed when secondary condition is true
} else {
# executed when no conditions are true
}
Chained Comparison Operations#
Jac supports elegant chained comparisons for range checking and multiple relationships:
score = 85;
if 0 <= score <= 59 {
grade = "F";
} elif 60 <= score <= 69 {
grade = "D";
} elif 70 <= score <= 79 {
grade = "C";
} elif 80 <= score <= 89 {
grade = "B";
} else {
grade = "A";
}
Chained comparisons provide natural mathematical notation that improves readability and reduces the need for complex boolean expressions.
Boolean Logic Integration#
If statements work with complex boolean expressions using logical operators:
# Logical AND
if user.is_authenticated and user.has_permission("read") {
display_content();
}
# Logical OR
if is_admin or is_moderator {
access_admin_panel();
}
# Logical NOT
if not is_maintenance_mode {
process_requests();
}
# Complex combinations
if (user.age >= 18 and user.verified) or user.has_guardian_consent {
allow_registration();
}
Sequential Evaluation Behavior#
Elif statements provide efficient multi-way branching with sequential evaluation:
temperature = 75;
if temperature < 32 {
status = "freezing";
} elif temperature < 50 {
status = "cold"; # Only checked if temperature >= 32
} elif temperature < 80 {
status = "comfortable"; # Only checked if temperature >= 50
} else {
status = "hot"; # Only if temperature >= 80
}
Once a condition matches, remaining elif and else blocks are skipped, ensuring exactly one block executes and optimizing performance.
Object-Spatial Integration#
If statements integrate seamlessly with object-spatial programming constructs:
walker PathValidator {
can validate_path with entry {
if here.is_accessible {
# Continue traversal
visit [-->];
} elif here.has_alternate_route {
# Try alternate path
visit here.alternate_nodes;
} else {
# No valid path found
report "Path blocked at node";
disengage;
}
}
}
node SecurityNode {
has access_level: int;
can check_access with visitor entry {
if visitor.security_clearance >= self.access_level {
visitor.grant_access();
} else {
visitor.deny_access();
# Prevent further traversal
}
}
}
Type-Safe Conditional Operations#
Jac's type system ensures conditional safety through compile-time checking:
# Type checking with isinstance
if isinstance(data, dict) {
process_dictionary(data);
} elif isinstance(data, list) {
process_list(data);
}
# Null safety patterns
if user_input is not None {
validated_input = validate(user_input);
if validated_input.is_valid {
process_input(validated_input);
}
}
Nested Conditional Patterns#
If statements support nesting for complex decision trees:
walker DecisionMaker {
can make_decision with entry {
if here.has_data {
if here.data.is_valid {
if here.data.priority == "high" {
process_immediately(here.data);
} else {
queue_for_processing(here.data);
}
} else {
clean_invalid_data(here);
}
} else {
request_data_update(here);
}
}
}
Conditional Expression Support#
If statements work with various expression types:
# Function call conditions
if validate_credentials(username, password) {
login_user(username);
}
# Property access conditions
if node.status == "active" and node.load < threshold {
assign_task(node, new_task);
}
# Collection membership
if user_id in authorized_users {
grant_access();
}
# Complex expressions
if calculate_risk_score(transaction) > risk_threshold {
flag_for_review(transaction);
}
Performance Optimization#
If statements include several performance optimizations:
Short-Circuit Evaluation: Logical operators (and, or) stop evaluation as soon as the result is determined, minimizing unnecessary computation.
Branch Prediction: The compiler optimizes frequently taken branches based on usage patterns.
Condition Ordering: Place most likely conditions first in elif chains for optimal performance.
Common Conditional Patterns#
Input Validation:
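A hedged sketch (helpers such as is_valid_format and accept_input are illustrative):
if user_input is None or len(user_input) == 0 {
    print("Input is required");
} elif not is_valid_format(user_input) {    # is_valid_format is a hypothetical helper
    print("Unrecognized input format");
} else {
    accept_input(user_input);
}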
Range Validation:
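A hedged sketch using chained comparisons (save_age is an illustrative helper):
if 0 <= age <= 120 {
    save_age(age);
} else {
    print("Age out of range");
}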
Error Handling:
if operation.has_error() {
log_error(operation.get_error());
return default_value;
} else {
return operation.get_result();
}
Configuration-Based Logic:
if config.debug_enabled {
log_debug_info(current_state);
}
if config.feature_flags.new_algorithm {
use_new_algorithm();
} else {
use_legacy_algorithm();
}
Integration with Graph Traversal#
If statements enable sophisticated conditional traversal patterns:
walker SmartTraverser {
has visited: set = set();
can traverse with entry {
# Avoid cycles
if here in self.visited {
disengage;
}
self.visited.add(here);
# Conditional traversal based on node properties
if here.node_type == "data" {
process_data_node(here);
visit [-->:DataEdge:];
} elif here.node_type == "control" {
if here.should_continue() {
visit [-->];
} else {
disengage;
}
}
}
}
If statements provide the foundation for decision-making in Jac programs, supporting both traditional programming patterns and sophisticated object-spatial operations with clear, readable syntax and robust type safety.
While statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
While statements in Jac provide iterative execution based on conditional expressions, enabling loops that continue as long as a specified condition remains true. The while loop syntax offers a fundamental control structure for implementing algorithms that require repeated execution with dynamic termination conditions.
Basic While Loop Syntax
While statements follow this pattern from the grammar:
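A minimal sketch of the form (the generated grammar snippet is authoritative):
while condition {
    # statements executed while condition remains true
}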
Example Implementation
The provided example demonstrates a basic counting loop:
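A minimal reconstruction of that loop, based on the execution flow described below:
with entry {
    i = 1;
    while i < 6 {
        print(i);
        i += 1;
    }
}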
Execution flow:
1. Initialize counter variable i = 1
2. Check condition i < 6 (true, so enter loop)
3. Execute loop body: print i and increment i
4. Check condition again with new value
5. Repeat until condition becomes false
6. Exit loop when i reaches 6
Key Components
Condition Expression
- Evaluated before each iteration
- Must be a boolean expression or evaluable to boolean
- Loop continues while condition is true
- Loop exits when condition becomes false
Code Block
- Enclosed in curly braces {}
- Contains statements to execute repeatedly
- Should modify variables that affect the condition to avoid infinite loops
- Can contain any valid Jac statements
Loop Control Variables
While loops typically require explicit management of control variables:
Counter-Based Loops
Condition-Based Loops
Iterator-Based Loops
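Hedged sketches of the three styles named above (names such as job, items, and process are illustrative):
# Counter-based: an explicit counter drives termination
i = 0;
while i < 10 {
    print(i);
    i += 1;
}

# Condition-based: loop until external state changes
while not job.is_done() {    # job is a hypothetical object
    job.step();
}

# Iterator-based: advance an index over a collection manually
idx = 0;
while idx < len(items) {
    process(items[idx]);
    idx += 1;
}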
Common While Loop Patterns
Input Processing
user_input = get_input();
while user_input != "quit" {
process_command(user_input);
user_input = get_input();
}
Search Operations
found = False;
index = 0;
while index < len(data) and not found {
if data[index] == target {
found = True;
} else {
index += 1;
}
}
Convergence Algorithms
error = calculate_error();
iteration = 0;
while error > tolerance and iteration < max_iterations {
update_parameters();
error = calculate_error();
iteration += 1;
}
Infinite Loop Prevention
While loops require careful design to avoid infinite loops:
Loop Guards
attempts = 0;
max_attempts = 100;
while condition and attempts < max_attempts {
# loop body
attempts += 1;
}
Progress Verification
previous_value = initial_value;
while not converged {
current_value = compute_next();
if current_value == previous_value {
break; # Prevent infinite loop
}
previous_value = current_value;
}
Integration with Control Statements
While loops work with control flow statements:
Break Statement
while True {
input = get_input();
if input == "exit" {
break; # Exit loop immediately
}
process(input);
}
Continue Statement
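Continue skips the rest of the current iteration and re-evaluates the condition, as in this sketch:
count = 0;
while count < 10 {
    count += 1;
    if count % 2 == 0 {
        continue;  # Skip even values
    }
    print(count);  # Only odd values reach here
}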
Nested While Loops
While loops can be nested for complex iteration patterns:
row = 0;
while row < height {
col = 0;
while col < width {
process_cell(row, col);
col += 1;
}
row += 1;
}
Object-Spatial Integration
While loops work within object-spatial contexts:
Walker State Loops
walker Processor {
can process with `node entry {
attempts = 0;
while not here.is_processed and attempts < 3 {
here.attempt_processing();
attempts += 1;
}
if here.is_processed {
visit [-->];
}
}
}
Node Processing Loops
node BatchProcessor {
can process_batch with Worker entry {
batch_index = 0;
while batch_index < self.batch_size {
self.process_item(batch_index);
batch_index += 1;
}
}
}
Collection Processing
While loops for manual collection iteration:
Array Processing
index = 0;
while index < len(items) {
item = items[index];
if item.needs_processing {
process(item);
}
index += 1;
}
Dynamic Collections
while queue.has_items() {
item = queue.dequeue();
result = process(item);
if result.creates_new_items {
queue.enqueue_all(result.new_items);
}
}
Performance Considerations
Condition Evaluation
- Condition is evaluated before every iteration
- Complex conditions can impact performance
- Consider caching expensive calculations
Loop Optimization
# Less efficient
while expensive_function() < threshold {
# loop body
}
# More efficient
limit = expensive_function();
while counter < limit {
# loop body
counter += 1;
}
Memory Usage
- Variables declared inside loops are recreated each iteration
- Consider declaring outside loop when appropriate
Comparison with For Loops
While loops are preferred when:
- Termination condition is complex or dynamic
- Number of iterations is unknown in advance
- Loop control requires custom logic
For loops are preferred when:
- Iterating over collections
- Counter-based iteration with known bounds
- Standard increment/decrement patterns
Error Handling in While Loops
while has_work() {
try {
task = get_next_task();
task.execute();
} except TaskError as e {
log_error(e);
continue; # Skip failed task, continue with next
} except CriticalError as e {
log_critical(e);
break; # Exit loop on critical error
}
}
Best Practices
- Always modify loop variables: Ensure the condition can eventually become false
- Use meaningful conditions: Make loop termination logic clear
- Avoid complex conditions: Keep conditions simple and readable
- Include safety guards: Prevent infinite loops with counters or timeouts
- Consider alternatives: Use for loops when appropriate for better readability
Common Pitfalls
- Infinite loops: Forgetting to modify condition variables
- Off-by-one errors: Incorrect boundary conditions
- Uninitialized variables: Using undefined variables in conditions
- Side effects: Unexpected condition changes from function calls
While statements in Jac provide essential iterative control for scenarios requiring dynamic loop termination. They complement for loops by handling cases where the number of iterations is not predetermined, making them valuable for algorithms involving search, convergence, and event-driven processing.
For statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
For statements provide powerful iteration mechanisms with multiple syntax variants designed for different looping scenarios. Jac supports both traditional iteration patterns and expressive loop constructs that enhance readability while reducing common programming errors.
For Loop Variants#
Jac offers three distinct for loop syntaxes:
For-In Loops: Iterate over collections and iterables with clean, readable syntax.
For-To-By Loops: Explicit counter-based iteration with clear initialization, termination, and increment specifications.
Async For Loops: Asynchronous iteration for concurrent processing patterns.
For-In Loop Syntax#
For-in loops provide clean iteration over collections and sequences:
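The general form is (the loop variable and iterable are placeholders):
for item in iterable {
    # body executed once per element
}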
This syntax works with all iterable types including strings, lists, ranges, and custom collections.
String and Character Iteration#
String iteration processes each character individually, providing natural text processing capabilities.
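For example:
for ch in "jac" {
    print(ch);  # Prints j, a, c on separate lines
}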
Range-Based Iteration#
for number in range(1, 5) {
print(number); # Prints 1, 2, 3, 4
}
for index in range(len(array)) {
process(array[index]);
}
Range objects generate sequences efficiently with exclusive end boundaries, following Python conventions.
Collection Iteration#
For-in loops work seamlessly with all Jac collection types:
# List iteration
for item in [1, 2, 3, 4, 5] {
process_item(item);
}
# Dictionary key iteration
data = {"name": "John", "age": 30};
for key in data {
print(f"{key}: {data[key]}");
}
# Set iteration
for element in {1, 2, 3, 4} {
validate_element(element);
}
For-To-By Loop Syntax#
For-to-by loops provide explicit control over counter-based iteration:
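The general shape is (initializer, condition, and update are placeholders):
for counter=start to condition by update {
    # body executed while condition holds
}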
This syntax makes loop components explicit and reduces common iteration errors.
For-To-By Examples#
# Basic counting
for i=0 to i<10 by i+=1 {
print(i); # Prints 0 through 9
}
# Custom increments
for count=100 to count>0 by count-=5 {
print(f"Countdown: {count}");
}
# Complex conditions
for x=1.0 to x<=100.0 by x*=1.5 {
# Exponential growth pattern
process_value(x);
}
Nested Loop Patterns#
Different loop syntaxes can be combined for complex iteration patterns:
for outer_char in "abc" {
for inner_num in range(1, 3) {
for counter=1 to counter<=2 by counter+=1 {
print(f"{outer_char}-{inner_num}-{counter}");
}
}
}
This demonstrates the flexibility of mixing for-in and for-to-by syntaxes based on specific needs.
Advanced For-In Patterns#
Enumeration with Index:
Dictionary Items:
Destructuring Assignment:
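Hedged sketches of the three patterns named above, assuming Python-style enumerate, dict.items, and tuple loop targets are available (items and settings are illustrative names):
# Enumeration with index
for idx, value in enumerate(items) {
    print(f"{idx}: {value}");
}

# Dictionary items
for key, value in settings.items() {
    print(f"{key} = {value}");
}

# Destructuring assignment over pairs
for x, y in [(1, 2), (3, 4)] {
    print(x + y);
}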
Object-Spatial Integration#
For loops integrate naturally with object-spatial programming constructs:
walker GraphTraverser {
can traverse_neighbors with entry {
# Iterate over connected nodes
for neighbor in [-->] {
if neighbor.is_processable {
visit neighbor;
}
}
}
can process_edges with entry {
# Iterate over specific edge types
for edge in [-->:DataEdge:] {
edge.process_data();
}
}
}
node CollectionNode {
has items: list;
can process_items with visitor entry {
for item in self.items {
result = visitor.process_item(item);
if result.should_stop {
break;
}
}
}
}
Control Flow Integration#
For loops work seamlessly with control statements:
for item in large_collection {
if item.should_skip() {
continue; # Skip to next iteration
}
if item.is_terminal() {
break; # Exit loop entirely
}
process_item(item);
}
Asynchronous For Loops#
For asynchronous iteration over async iterables:
async for data_chunk in async_data_stream {
processed = await process_chunk(data_chunk);
await store_result(processed);
}
Async for loops enable efficient processing of streaming data and concurrent operations.
Performance Considerations#
For-In Optimization: Optimized for collection traversal with minimal memory overhead and efficient iterator protocols.
For-To-By Optimization: Specialized arithmetic operations and efficient condition evaluation for counter-based loops.
Memory Efficiency: Iterators generate values on demand, supporting large datasets without excessive memory usage.
Complex Iteration Patterns#
Multi-Variable For-To-By:
for i=0, j=len(array)-1 to i<j by i+=1, j-=1 {
# Two-pointer technique
if array[i] + array[j] == target {
return (i, j);
}
}
Conditional Iteration:
for item in collection if item.is_valid() {
# Only iterate over valid items
process_valid_item(item);
}
Batch Processing:
for batch in chunked(large_dataset, batch_size=1000) {
process_batch(batch);
if should_pause() {
break;
}
}
Graph Traversal Patterns#
For loops enable sophisticated graph processing:
walker PathAnalyzer {
has path_lengths: dict = {};
can analyze_paths with entry {
# Analyze all possible paths
for target_node in [-->*] {
path_length = calculate_distance(here, target_node);
self.path_lengths[target_node.id] = path_length;
}
# Process paths by length
for length=1 to length<=max_depth by length+=1 {
nodes_at_distance = [n for n, d in self.path_lengths.items() if d == length];
for node in nodes_at_distance {
process_node_at_distance(node, length);
}
}
}
}
Error Handling in Loops#
for item in potentially_problematic_collection {
try {
result = risky_operation(item);
store_result(result);
} except ProcessingError as e {
log_error(f"Failed to process {item}: {e}");
continue; # Skip problematic items
}
}
Best Practices#
Choose Appropriate Syntax: Use for-in for collections, for-to-by for explicit counter control.
Clear Variable Names: Use descriptive names that indicate the purpose of loop variables.
Avoid Side Effects: Minimize modifications to collections during iteration to prevent unexpected behavior.
Performance Awareness: Consider memory usage and computational complexity for large datasets.
Control Flow: Use break and continue judiciously to implement complex iteration logic clearly.
For statements provide flexible, expressive iteration capabilities that support both traditional programming patterns and modern object-spatial operations, enabling developers to write clear, efficient code for a wide range of computational scenarios.
Try statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Try statements provide exception handling mechanisms in Jac, enabling robust error management and graceful recovery from runtime errors. This construct supports structured exception handling with try, except, else, and finally blocks.
Syntax#
try {
# code that may raise exceptions
} except ExceptionType as e {
# handle specific exception
} except {
# handle any exception
} else {
# executed if no exception occurs
} finally {
# always executed
}
Basic Exception Handling#
try {
result = risky_operation();
process(result);
} except ValueError as e {
print(f"Invalid value: {e}");
} except IOError {
print("IO operation failed");
}
Multiple Exception Types#
Handle different exceptions with specific responses:
walker DataProcessor {
can process with entry {
try {
data = here.load_data();
validated = validate(data);
here.result = transform(validated);
} except FileNotFoundError as e {
report {"error": "missing_data", "node": here};
} except ValidationError as e {
report {"error": "invalid_data", "details": str(e)};
} except Exception as e {
report {"error": "unexpected", "type": type(e).__name__};
}
}
}
Else Clause#
Execute code only when no exceptions occur:
can safe_divide(a: float, b: float) -> float {
try {
result = a / b;
} except ZeroDivisionError {
print("Division by zero!");
return 0.0;
} else {
print(f"Successfully computed {a}/{b} = {result}");
return result;
}
}
Finally Clause#
Guarantee cleanup code execution:
can process_file(filename: str) -> dict {
file_handle = None;
try {
file_handle = open_file(filename);
data = parse_data(file_handle);
return process(data);
} except IOError as e {
log_error(f"File operation failed: {e}");
return {};
} finally {
if file_handle {
file_handle.close();
print("File handle closed");
}
}
}
Graph Operations Error Handling#
Robust walker traversal:
walker SafeTraverser {
has errors: list = [];
can traverse with entry {
try {
# Process current node
here.process();
# Get next nodes safely
next_nodes = [-->];
# Visit each node
for n in next_nodes {
try {
visit n;
} except NodeAccessError as e {
self.errors.append({
"source": here,
"target": n,
"error": str(e)
});
}
}
} except ProcessingError as e {
report {"failed_node": here, "error": e};
# Continue traversal despite error
}
}
}
Resource Management Pattern#
Using try-finally for resource cleanup:
node DatabaseNode {
has connection: any = None;
can query(sql: str) -> list {
try {
self.connection = create_connection();
cursor = self.connection.cursor();
try {
cursor.execute(sql);
return cursor.fetchall();
} finally {
cursor.close();
}
} except DatabaseError as e {
log_error(f"Query failed: {e}");
return [];
} finally {
if self.connection {
self.connection.close();
self.connection = None;
}
}
}
}
Nested Try Blocks#
Handle errors at multiple levels:
can complex_operation(data: dict) -> any {
try {
# Outer level - general errors
prepared = prepare_data(data);
try {
# Inner level - specific operation
result = critical_process(prepared);
return finalize(result);
} except CriticalError as e {
# Handle critical errors specifically
return handle_critical(e);
}
} except Exception as e {
# Catch-all for unexpected errors
log_unexpected(e);
return default_value();
}
}
Custom Exception Handling#
Define and handle custom exceptions:
class GraphError(Exception) {}
class NodeNotFoundError(GraphError) {}
class CycleDetectedError(GraphError) {}
walker GraphValidator {
can validate with entry {
try {
check_node_integrity(here);
detect_cycles(here);
validate_connections(here);
} except NodeNotFoundError as e {
report {"error": "missing_node", "details": e};
} except CycleDetectedError as e {
report {"error": "cycle_found", "nodes": e.cycle_nodes};
} except GraphError as e {
report {"error": "graph_invalid", "reason": str(e)};
}
}
}
Best Practices#
- Specific Exceptions First: Order except blocks from most to least specific
- Minimal Try Blocks: Keep try blocks focused on code that may fail
- Always Clean Up: Use finally for resource cleanup
- Meaningful Error Messages: Provide context in error handling
- Don't Suppress Errors: Avoid empty except blocks
Integration with Object-Spatial#
Exception handling in graph contexts:
walker ResilientWalker {
has retry_count: int = 3;
has failed_nodes: list = [];
can process with entry {
attempts = 0;
while attempts < self.retry_count {
try {
result = here.complex_operation();
report {"success": here, "result": result};
break;
} except TemporaryError as e {
attempts += 1;
if attempts >= self.retry_count {
self.failed_nodes.append(here);
report {"failed": here, "attempts": attempts};
}
wait_exponential(attempts);
} except PermanentError as e {
self.failed_nodes.append(here);
report {"permanent_failure": here, "error": e};
break;
}
}
# Continue traversal regardless of errors
visit [-->];
}
}
Try statements in Jac provide comprehensive error handling capabilities, essential for building robust applications that gracefully handle failures while maintaining system stability.
Match statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Match statements provide powerful pattern matching capabilities in Jac, enabling elegant handling of complex data structures and control flow based on value patterns. This feature supports structural pattern matching similar to modern programming languages.
Syntax#
match expression {
case pattern:
# statements
case pattern if condition:
# guarded pattern statements
}
Pattern Types#
Literal Patterns#
Match specific literal values:
match value {
case 42:
print("The answer");
case 3.14:
print("Pi approximation");
case "hello":
print("Greeting");
}
Capture Patterns#
Bind matched values to variables:
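For example (command is an illustrative variable):
match command {
    case "quit":
        shutdown();
    case other:
        print(f"Unknown command: {other}");  # 'other' captures the matched value
}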
Sequence Patterns#
Match lists and tuples:
match point {
case [x, y]:
print(f"2D point: ({x}, {y})");
case [x, y, z]:
print(f"3D point: ({x}, {y}, {z})");
case [first, *rest]:
print(f"First: {first}, Rest: {rest}");
}
Mapping Patterns#
Match dictionary structures:
match config {
case {"host": host, "port": port}:
connect(host, port);
case {"url": url, **options}:
connect_url(url, options);
}
Class Patterns#
Match object instances and extract attributes:
match shape {
case Circle(radius=r):
print(f"Circle area: {3.14 * r * r}");
case Rectangle(width=w, height=h):
print(f"Rectangle area: {w * h}");
}
OR Patterns#
Match multiple patterns:
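For example (handler names are illustrative):
match status {
    case "ok" | "success" | "done":
        mark_complete();
    case "error" | "failed":
        mark_failed();
}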
AS Patterns#
Bind the entire match to a name while also matching a pattern:
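For example:
match point {
    case [x, y] as coords:
        print(f"Point {coords} has x={x}, y={y}");
}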
Guard Conditions#
Add conditions to patterns:
match user {
case {"age": age, "role": role} if age >= 18:
grant_access(role);
case {"age": age} if age < 18:
deny_access("Too young");
}
Singleton Patterns#
Match None and boolean values:
match result {
case None:
print("No result");
case True:
print("Success");
case False:
print("Failure");
}
Practical Example#
node RequestHandler {
can handle(request: dict) {
match request {
case {"method": "GET", "path": path}:
self.handle_get(path);
case {"method": "POST", "path": path, "body": body}:
self.handle_post(path, body);
case {"method": "DELETE", "path": path} if self.can_delete():
self.handle_delete(path);
case {"method": method}:
self.error(f"Unsupported method: {method}");
case _:
self.error("Invalid request format");
}
}
}
Match statements in Jac provide a declarative way to handle complex conditional logic, making code more readable and maintainable while reducing the need for nested if-elif chains.
Match patterns#
Code Example
Runnable Example in Jac and JacLib
obj Point {
has x: float,
y: float;
}
def match_example(data: any) {
match data {
# MatchValue
case 42:
print("Matched the value 42.");
# MatchSingleton
case True:
print("Matched the singleton True.");
case None:
print("Matched the singleton None.");
# MatchSequence
case [1, 2, 3]:
print("Matched a specific sequence [1, 2, 3].");
# MatchStar
case [1, *rest, 3]:
print(
f"Matched a sequence starting with 1 and ending with 3. Middle: {rest}"
);
# MatchMapping
case {"key1" : 1, "key2" : 2, **rest}:
print(
f"Matched a mapping with key1 and key2. Rest: {rest}"
);
# MatchClass
case Point(int(a), y = 0):
print(f"Point with x={a} and y=0");
# MatchAs
case [1, 2, rest_val as value]:
print(
f"Matched a sequence and captured the last value: {value}"
);
# MatchOr
case [1, 2] | [3, 4]:
print("Matched either the sequence [1, 2] or [3, 4].");
case _:
print("No match found.");
}
}
with entry {
match_example(Point(x=9, y=0));
}
Jac Grammar Snippet
Description
Match literal patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Match literal patterns in Jac enable direct matching against constant values including numbers, strings, and other literal expressions. These patterns provide the foundation for value-based pattern matching in match statements.
Basic Literal Pattern Syntax#
match value {
case 42 {
print("The answer to everything");
}
case "hello" {
print("Greeting detected");
}
case 3.14159 {
print("Pi approximation");
}
case True {
handle_true_case();
}
case None {
handle_null_case();
}
}
Supported Literal Types#
Numeric literals:
match status_code {
case 200 { handle_success(); }
case 404 { handle_not_found(); }
case 500 { handle_server_error(); }
}
String literals:
match command {
case "start" { start_process(); }
case "stop" { stop_process(); }
case "status" { show_status(); }
}
Different numeric bases:
match flag_value {
case 0xFF { handle_max_value(); } # Hexadecimal
case 0b1010 { handle_binary(); } # Binary
case 0o755 { handle_permissions(); } # Octal
}
Object-Spatial Pattern Matching#
walker StatusChecker {
can check_node with entry {
match here.status {
case "active" {
visit [-->];
}
case "inactive" {
skip;
}
case "error" {
report f"Error node: {here.id}";
}
}
}
}
Complex Literal Matching#
Combining with guards:
match user_input {
case "admin" if user.has_admin_rights() {
enter_admin_mode();
}
case "guest" {
enter_guest_mode();
}
}
Multiple literals:
match error_code {
case 400 | 401 | 403 {
handle_client_error(error_code);
}
case 500 | 502 | 503 {
handle_server_error(error_code);
}
}
Performance Considerations#
- Literal patterns use efficient direct comparison
- Compiler may optimize multiple literals into jump tables
- Place most common cases first for better performance
Best Practices#
- Use meaningful literal values
- Group related cases together
- Consider using named constants for magic numbers
- Combine with guards for complex conditions
Literal patterns provide a clean, efficient way to handle value-based branching in Jac programs, supporting both simple conditional logic and complex state-based processing.
Match singleton patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Match singleton patterns in Jac enable matching against singleton values including None, True, and False. These patterns are essential for handling null values and boolean states in pattern matching.
Singleton Pattern Syntax#
match value {
case None {
handle_null_case();
}
case True {
handle_true_case();
}
case False {
handle_false_case();
}
}
None Pattern Matching#
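A minimal sketch (lookup_user and user_id are illustrative):
match lookup_user(user_id) {
    case None {
        print("User not found");
    }
    case user {
        print(f"Found user: {user}");
    }
}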
Boolean Pattern Matching#
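A minimal sketch (handler names are illustrative):
match feature_enabled {
    case True {
        activate_feature();
    }
    case False {
        use_default_behavior();
    }
}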
Object-Spatial Integration#
walker ValidationWalker {
can validate_node with entry {
match here.data {
case None {
report f"Node {here.id} has no data";
return;
}
case data {
match data.is_valid() {
case True {
visit [-->];
}
case False {
report f"Invalid data at {here.id}";
}
}
}
}
}
}
Complex Singleton Usage#
Nested matching:
match session.get("user") {
case None {
match session.get("guest_allowed") {
case True { create_guest_session(); }
case False { reject_session(); }
}
}
case user_data {
create_user_session(user_data);
}
}
With guards:
match database_connection {
case None if retry_count < max_retries {
attempt_reconnection();
}
case None {
fail_with_error("Database unavailable");
}
case connection {
proceed_with_query(connection);
}
}
Performance Considerations#
- Uses fast identity checks for singletons
- No object creation overhead
- Optimized by compiler for common patterns
Best Practices#
- Always handle None cases explicitly
- Use singleton patterns for explicit boolean logic
- Combine with guards for complex conditions
- Prefer singleton patterns over boolean expressions in match statements
Singleton patterns provide essential building blocks for robust pattern matching, enabling clean handling of null values and boolean states while maintaining type safety.
Match capture patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Match capture patterns in Jac enable binding values to variables during pattern matching, allowing programs to extract and use matched values within case blocks. Capture patterns provide the foundation for destructuring complex data structures.
Basic Capture Syntax#
match user_input {
case username {
# 'username' now contains the matched value
print(f"Hello, {username}!");
}
}
Capture with Guards#
match temperature {
case temp if temp > 100 {
handle_overheating(temp);
}
case temp if temp < 0 {
handle_freezing(temp);
}
case temp {
normal_operation(temp);
}
}
Object-Spatial Integration#
walker PatternProcessor {
can process_node with entry {
match here.node_type {
case "data" {
# Capture and process data nodes
visit [-->];
}
case node_type {
# Capture unknown node types
log_unknown_type(node_type, here);
}
}
}
}
Complex Structure Capture#
Sequence patterns:
match coordinates {
case [x, y] {
distance = (x**2 + y**2)**0.5;
return distance;
}
case coords {
# Capture any other format
return None;
}
}
Dictionary patterns:
match config_data {
case {"type": config_type, "settings": settings} {
apply_settings(settings);
}
case config {
apply_default_config(config);
}
}
Multiple Capture Patterns#
match response {
case {"success": True, "data": result} {
return result;
}
case {"success": False, "error": error_msg} {
handle_error(error_msg);
return None;
}
case response_data {
log_unexpected_response(response_data);
return None;
}
}
Scope and Performance#
- Captured variables are scoped to their case blocks
- Variables reference original matched values (no copying)
- No performance penalty for simple captures
Best Practices#
- Use meaningful variable names for captured values
- Remember that captured variables are case-scoped
- Combine captures with guards for complex conditions
- Always include a capture pattern for unmatched cases
Capture patterns provide essential functionality for extracting and working with matched values in Jac's pattern matching system, enabling elegant data destructuring in both traditional and object-spatial programming contexts.
Match sequence patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
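Sequence patterns destructure lists and tuples positionally, optionally capturing remaining elements with a * element. A minimal sketch, using the colon-delimited case syntax from the runnable Match patterns example above:
match values {
    case [x, y]:
        print(f"Pair: {x}, {y}");
    case [first, *rest]:
        print(f"Starts with {first}; remaining: {rest}");
}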
Match mapping patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
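Mapping patterns match dictionary structures by key, binding the associated values and optionally capturing leftover entries with **. A minimal sketch (handler names are illustrative):
match request {
    case {"action": action, "payload": payload}:
        handle(action, payload);
    case {"action": action, **extras}:
        handle_with_options(action, extras);
}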
Match class patterns#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
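Class patterns match instances of an archetype and extract attributes by keyword. A minimal sketch reusing the Point archetype from the Match patterns example above:
match shape {
    case Point(x=0, y=0):
        print("Origin");
    case Point(x=px, y=py):
        print(f"Point at ({px}, {py})");
}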
Context managers#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Context managers in Jac provide automatic resource management through with statements, ensuring proper acquisition and release of resources. This feature supports the context management protocol for clean handling of files, connections, locks, and other resources.
Syntax#
with expression as variable {
# code using the resource
}
# Multiple context managers
with expr1 as var1, expr2 as var2 {
# code using both resources
}
# Async context managers
async with async_expression as variable {
# async code using the resource
}
Basic Usage#
# File handling
with open("data.txt", "r") as file {
content = file.read();
process(content);
} # File automatically closed
# Database connection
with get_connection() as conn {
cursor = conn.cursor();
cursor.execute("SELECT * FROM users");
results = cursor.fetchall();
} # Connection automatically closed
Multiple Context Managers#
Manage multiple resources simultaneously:
with open("input.txt", "r") as infile,
open("output.txt", "w") as outfile {
data = infile.read();
processed = transform(data);
outfile.write(processed);
} # Both files automatically closed
Custom Context Managers#
Create your own context managers:
obj TimedOperation {
has name: str;
has start_time: float;
can __enter__(self) {
self.start_time = time.now();
print(f"Starting {self.name}");
return self;
}
can __exit__(self, exc_type, exc_val, exc_tb) {
duration = time.now() - self.start_time;
print(f"{self.name} took {duration}s");
return False; # Don't suppress exceptions
}
}
# Usage
with TimedOperation("data_processing") as timer {
process_large_dataset();
}
Graph Lock Management#
Context managers for thread-safe graph operations:
node ThreadSafeNode {
has lock: Lock = Lock();
has data: dict = {};
can safe_update(key: str, value: any) {
with self.lock {
old_value = self.data.get(key);
self.data[key] = value;
log_change(key, old_value, value);
}
}
}
Transaction Management#
Database-style transactions:
obj Transaction {
has operations: list = [];
has committed: bool = False;
can __enter__(self) {
self.begin();
return self;
}
can __exit__(self, exc_type, exc_val, exc_tb) {
if exc_type is None {
self.commit();
} else {
self.rollback();
}
return False;
}
can add_operation(op: func) {
self.operations.append(op);
}
can commit(self) {
for op in self.operations {
op();
}
self.committed = True;
}
can rollback(self) {
# Undo operations
print("Transaction rolled back");
}
}
Walker State Management#
Manage walker state during traversal:
obj WalkerContext {
has walker: walker;
has original_state: dict;
can __enter__(self) {
self.original_state = self.walker.get_state();
return self.walker;
}
can __exit__(self, exc_type, exc_val, exc_tb) {
if exc_type {
# Restore state on error
self.walker.set_state(self.original_state);
}
return False;
}
}
walker StatefulWalker {
has state: dict = {};
can process with entry {
with WalkerContext(self) as ctx {
# Modify state during processing
ctx.state["processing"] = True;
# Process node
result = here.complex_operation();
# State automatically restored on error
}
}
}
Async Context Managers#
For asynchronous resource management:
async with acquire_async_resource() as resource {
data = await resource.fetch_data();
processed = await process_async(data);
await resource.save(processed);
}
Graph Session Management#
obj GraphSession {
has graph: node;
has changes: list = [];
can __enter__(self) {
self.changes = [];
return self;
}
can __exit__(self, exc_type, exc_val, exc_tb) {
if exc_type is None {
# Apply all changes
for change in self.changes {
change.apply();
}
} else {
# Discard changes on error
print(f"Discarding {len(self.changes)} changes");
}
return False;
}
can add_node(self, node: node) {
self.changes.append(AddNodeChange(node));
}
can add_edge(self, src: node, dst: node, edge_type: type) {
self.changes.append(AddEdgeChange(src, dst, edge_type));
}
}
# Usage
with GraphSession(root) as session {
n1 = DataNode(value=10);
n2 = DataNode(value=20);
session.add_node(n1);
session.add_node(n2);
session.add_edge(n1, n2, DataEdge);
} # Changes committed atomically
Temporary State Changes#
obj TemporaryState {
has target: obj;
has attr: str;
has new_value: any;
has old_value: any;
can __enter__(self) {
self.old_value = getattr(self.target, self.attr);
setattr(self.target, self.attr, self.new_value);
return self.target;
}
can __exit__(self, exc_type, exc_val, exc_tb) {
setattr(self.target, self.attr, self.old_value);
return False;
}
}
# Usage
node ConfigNode {
has debug: bool = False;
can process_with_debug {
with TemporaryState(self, "debug", True) {
# Debug is True here
self.process_data();
}
# Debug restored to False
}
}
Best Practices#
- Always Use With: For resources that need cleanup
- Don't Suppress Exceptions: Return False from exit
- Minimal Scope: Keep with blocks focused
- Document Side Effects: Clear about what's managed
- Test Error Cases: Ensure cleanup on exceptions
Common Patterns#
Resource Pool#
with get_resource_from_pool() as resource {
# Use resource
resource.execute(operation);
} # Resource returned to pool
Nested Contexts#
with outer_context() as outer {
# Outer resource acquired
with inner_context() as inner {
# Both resources available
process(outer, inner);
} # Inner released
} # Outer released
Optional Context#
context = get_optional_context() if condition else nullcontext();
with context as ctx {
# Works whether context exists or not
process_data(ctx);
}
Context managers in Jac provide a robust pattern for resource management, ensuring proper cleanup even in the presence of errors, making code more reliable and maintainable.
Global and nonlocal statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Global and nonlocal statements in Jac provide mechanisms for accessing and modifying variables from outer scopes. These statements enable controlled access to variables defined outside the current function or ability scope.
Global Statement#
The global statement declares that variables refer to globally scoped names:
Basic Usage#
# Global variable
glob state: dict = {};
obj Controller {
can update_state(key: str, value: any) {
:g: state;
state[key] = value;
}
can get_state -> dict {
:g: state;
return state;
}
}
Multiple Global Variables#
glob counter: int = 0;
glob total: float = 0.0;
glob items: list = [];
can process_item(value: float) {
:g: counter, total, items;
counter += 1;
total += value;
items.append(value);
}
Nonlocal Statement#
The nonlocal statement declares that variables refer to names in the nearest enclosing scope:
Nested Function Scopes#
can create_counter -> (func) {
count = 0;
can increment -> int {
:nl: count;
count += 1;
return count;
}
return increment;
}
In Walker Abilities#
walker StateTracker {
has visited: list = [];
can track with entry {
visited_count = 0;
can log_visit {
:nl: visited_count;
visited_count += 1;
self.visited.append(here);
}
# Visit nodes and track
for node in [-->] {
visit node;
log_visit();
}
report f"Visited {visited_count} nodes";
}
}
Scope Resolution Rules#
Global Scope#
- Variables declared at module level
- Accessible throughout the module
- Require explicit global declaration to modify
Nonlocal Scope#
- Variables in enclosing function/ability scope
- Not global, not local
- Require explicit nonlocal declaration to modify
Local Scope#
- Variables defined within current function/ability
- Default scope for assignments
- Shadow outer scope variables
Common Patterns#
Configuration Management#
glob config: dict = {
"debug": False,
"timeout": 30
};
obj App {
can set_debug(enabled: bool) {
:g: config;
config["debug"] = enabled;
}
can with_timeout(seconds: int) -> func {
can run_with_timeout(fn: func) {
:g: config;
old_timeout = config["timeout"];
config["timeout"] = seconds;
result = fn();
config["timeout"] = old_timeout;
return result;
}
return run_with_timeout;
}
}
Counter Patterns#
can create_id_generator -> func {
next_id = 1000;
can generate -> int {
:nl: next_id;
id = next_id;
next_id += 1;
return id;
}
return generate;
}
State Accumulation#
walker Collector {
can collect with entry {
results = [];
errors = [];
can process_node {
:nl: results, errors;
try {
data = here.process();
results.append(data);
} except as e {
errors.append({"node": here, "error": e});
}
}
# Process all nodes
for node in [-->] {
process_node();
}
report {"results": results, "errors": errors};
}
}
Best Practices#
- Minimize Global State: Use sparingly for truly global concerns
- Prefer Parameters: Pass values explicitly when possible
- Document Side Effects: Clear comments for global modifications
- Use Nonlocal for Closures: Appropriate for nested function state
- Consider Alternatives: Class attributes or node properties
Integration with Object-Spatial#
In object-spatial contexts, consider using node/edge properties instead of global state:
# Instead of global state
glob graph_metadata: dict = {};
# Prefer node-based state
node GraphRoot {
has metadata: dict = {};
}
walker Processor {
can process with entry {
# Access via node instead of global
root.metadata["processed"] = True;
}
}
Global and nonlocal statements provide necessary escape hatches for scope management, but should be used judiciously in favor of Jac's more structured object-spatial approaches.
Object spatial typed context blocks#
Code Example
Runnable Example in Jac and JacLib
walker Producer {
can produce with `root entry;
}
node Product {
has number: int;
can make with Producer entry;
}
impl Producer.produce {
end = here;
for i=0 to i<3 by i+=1 {
end ++> (end := Product(number=i + 1));
}
visit [-->];
}
impl Product.make {
print(f"Hi, I am {self} returning a String");
visit [-->];
}
with entry {
root spawn Producer();
}
Jac Grammar Snippet
Description
Typed context blocks establish type-annotated scopes that provide compile-time type safety and runtime type assertions within data spatial operations. These blocks enhance the reliability of graph traversal and data processing by ensuring type consistency across topological boundaries.
Syntax and Structure#
The arrow syntax (->) introduces a typed context where all operations within the block are subject to the specified type constraints. This provides both documentation and enforcement of expected data types during graph operations.
Type Safety in Data Spatial Operations#
Typed context blocks ensure type consistency when walkers traverse between nodes with varying data structures:
walker DataValidator {
can validate with entry {
-> dict[str, any] {
# Ensures node data conforms to expected structure
node_data = here.data;
validated = check_required_fields(node_data);
if (validated) {
visit here.neighbors;
}
}
}
}
Return Type Enforcement#
When combined with abilities, typed context blocks enforce return type contracts:
node ProcessingNode {
can compute_result -> list[float] {
-> list[float] {
# Guarantees return type matches declaration
raw_values = self.get_raw_data();
processed = [float(v) for v in raw_values];
return processed;
}
}
}
Integration with Graph Traversal#
Typed context blocks work seamlessly with data spatial references and traversal operations:
walker TypedTraverser {
can process with entry {
# Type-safe neighbor access
-> list[node] {
neighbors = [-->];
filtered = neighbors |> filter(|n| n.has_required_data());
return filtered;
}
# Continue traversal with type safety
visit filtered;
}
}
Nested Type Contexts#
Type contexts can be nested to provide granular type control within complex operations:
can analyze_graph -> dict[str, list[node]] {
-> dict[str, list[node]] {
categories = {};
-> list[node] {
all_nodes = [-->*]; # Get all reachable nodes
for node in all_nodes {
category = node.get_category();
if (category not in categories) {
categories[category] = [];
}
categories[category].append(node);
}
}
return categories;
}
}
Typed context blocks bridge the dynamic nature of graph traversal with static type guarantees, enabling robust data spatial programs that maintain type safety across topological boundaries while preserving the flexibility of the computation-to-data paradigm.
Return statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Return statements in Jac provide the mechanism for functions and methods to exit and optionally return values to their callers. The return statement syntax supports both value-returning and void functions, enabling clear control flow and data passing in function-based programming.
Basic Return Statement Syntax
Return statements follow this pattern from the grammar:
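A sketch of the two basic forms:
return expression;   # Exit the function and hand a value back to the caller
return;              # Exit the function without a value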
Example Implementation
The provided example demonstrates a function that returns a computed value:
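A minimal reconstruction consistent with the key aspects listed below (the specific value 42 is an assumption):
def foo() -> int {
    a = 42;    # the actual value used in the original example is not shown here
    return a;
}

with entry {
    print("Returned:", foo());
}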
Key aspects:
- Type annotation: Function specifies return type -> int
- Variable assignment: Local variable a holds the return value
- Return expression: return a exits the function and returns the variable's value
- Caller usage: foo() can be used in expressions like print("Returned:", foo())
Return Statement Variations
Value Returns
Expression Returns
Conditional Returns
Void Returns
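Hedged sketches of each variation named above (function names are illustrative):
def get_limit() -> int {
    return 100;                 # Value return
}

def double(x: int) -> int {
    return x * 2;               # Expression return
}

def sign(x: int) -> str {
    if x >= 0 {
        return "non-negative";  # Conditional return
    }
    return "negative";
}

def log_message(msg: str) {
    print(msg);
    return;                     # Void return (no value)
}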
Early Returns
Return statements enable early function exit:
Guard Clauses
def process_data(data: list) -> bool {
if data is None {
return False; # Early exit for invalid input
}
if len(data) == 0 {
return False; # Early exit for empty data
}
# Main processing logic
return process(data);
}
Error Conditions
def divide(a: float, b: float) -> float {
if b == 0.0 {
return float('inf'); # Early return for division by zero
}
return a / b;
}
Multiple Return Paths
Functions can have multiple return statements:
Branching Logic
def grade_score(score: int) -> str {
if score >= 90 {
return "A";
} elif score >= 80 {
return "B";
} elif score >= 70 {
return "C";
} else {
return "F";
}
}
Complex Control Flow
def search_array(arr: list, target: int) -> int {
for i=0 to i<len(arr) by i+=1 {
if arr[i] == target {
return i; # Return index when found
}
}
return -1; # Return -1 when not found
}
Return Types and Type Safety
Jac enforces return type consistency:
Type Matching
def get_name() -> str {
return "John"; # Valid: string literal
# return 42; # Error: int doesn't match str
}
Multiple Value Returns
def get_coordinates() -> (int, int) {
return (10, 20); # Return tuple
}
def get_stats() -> dict {
return {"count": 5, "average": 3.2};
}
Nullable Returns
def find_user(id: int) -> User? {
user = database.find(id);
if user.exists {
return user;
}
return None; # Explicit null return
}
Returns in Different Contexts
Method Returns
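A minimal sketch of a method returning an object's state, following the obj method style used earlier in this document (Account is an illustrative archetype):
obj Account {
    has balance: float = 0.0;
    can get_balance -> float {
        return self.balance;
    }
}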
Ability Returns
walker DataCollector {
can collect_data with `node entry -> dict {
data = here.extract_data();
return data;
}
}
Lambda Returns
Object-Spatial Context Returns
Returns work within object-spatial programming:
Walker Method Returns
walker Analyzer {
can analyze with `node entry -> bool {
if here.is_valid {
analysis = here.perform_analysis();
return analysis.is_successful;
}
return False;
}
}
Node Method Returns
node DataNode {
can get_value with Reader entry -> int {
if visitor.has_permission {
return self.value;
}
return 0; # Default value for unauthorized access
}
}
Return Statement Control Flow
Function Termination
- Return immediately exits the function
- No code after return in the same block executes
- Function control returns to the caller
Nested Block Returns
def complex_function(x: int) -> str {
if x > 0 {
return "positive"; # Exits entire function
}
# This code executes only if x <= 0
return "non-positive";
}
Loop Returns
def find_first_even(numbers: list) -> int {
for num in numbers {
if num % 2 == 0 {
return num; # Exits function and loop
}
}
return -1; # No even number found
}
Performance Considerations
Early Returns for Efficiency
def expensive_computation(data: list) -> bool {
if len(data) == 0 {
return False; # Avoid expensive computation
}
# Expensive processing only if needed
return process_data(data);
}
Avoiding Unnecessary Computation
def validate_and_process(input: str) -> str {
if not is_valid(input) {
return "Invalid input"; # Skip processing
}
return expensive_process(input);
}
Best Practices
Clear Return Logic
def is_prime(n: int) -> bool {
if n < 2 {
return False;
}
for i=2 to i*i<=n by i+=1 {
if n % i == 0 {
return False;
}
}
return True;
}
Single Responsibility
def calculate_tax(income: float) -> float {
if income <= 0 {
return 0.0;
}
# Single calculation responsibility
return income * TAX_RATE;
}
Common Patterns
Factory Functions
def create_user(name: str, age: int) -> User {
user = User();
user.name = name;
user.age = age;
return user;
}
Transformer Functions
Validator Functions
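Hedged sketches of these two shapes (function names are illustrative):
def to_celsius(fahrenheit: float) -> float {
    # Transformer: maps an input value to an output value
    return (fahrenheit - 32.0) * 5.0 / 9.0;
}

def is_valid_email(address: str) -> bool {
    # Validator: returns a boolean verdict about its input
    return "@" in address and "." in address;
}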
Error Handling with Returns
def safe_divide(a: float, b: float) -> (float, str) {
if b == 0.0 {
return (0.0, "Division by zero");
}
return (a / b, "Success");
}
Integration with Exception Handling
def risky_operation() -> int {
try {
result = perform_operation();
return result;
} except OperationError as e {
log_error(e);
return -1; # Error indicator
}
}
Return statements in Jac provide essential function control flow, enabling clean separation of concerns, early optimization, and clear data flow patterns. The mandatory type annotations ensure return consistency while supporting both simple value returns and complex conditional logic, making functions reliable and type-safe components in Jac applications.
Yield statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Yield statements in Jac provide the foundation for generator functions and iterative computation patterns. These statements enable functions to produce sequences of values on-demand rather than computing and returning entire collections at once, supporting memory-efficient iteration and lazy evaluation.
Basic Yield Statement Syntax
Yield statements follow this pattern from the grammar:
yield expression; # Yield a value
yield; # Yield nothing (None/null)
yield from iterable; # Yield all values from another iterable
Generator Function Example
The provided example demonstrates a generator function that yields multiple values:
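A minimal reconstruction based on the consumption example shown below:
def myFunc {
    yield "Hello";
    yield 91;
    yield "Good Bye";
    yield;          # Yields None
}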
Key aspects:
- No return type: Generator functions don't specify return types like regular functions
- Multiple yields: Function can yield different values at different points
- Mixed types: Can yield different types (string, int, null)
- Execution suspension: Function pauses at each yield and resumes when next value is requested
Generator Consumption
Generators are consumed through iteration:
x = myFunc(); # Creates generator object
for z in x { # Iterates through yielded values
print(z); # Prints: "Hello", 91, "Good Bye", None
}
Execution flow:
1. myFunc() returns a generator object (doesn't execute the function body yet)
2. First iteration calls generator, executes until first yield "Hello"
3. Second iteration resumes after first yield, executes until yield 91
4. Process continues until all yields are exhausted
5. Generator automatically raises StopIteration when function ends
Yield Expression Types
Value Yields
Expression Yields
Variable Yields
Empty Yields
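Hedged sketches of each yield form named above (compute is an illustrative helper):
def yield_examples {
    yield 42;          # Value yield
    yield 2 + 3;       # Expression yield
    result = compute();    # compute is a hypothetical helper
    yield result;      # Variable yield
    yield;             # Empty yield (produces None)
}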
Yield From Statement
The yield from syntax delegates to another iterable:
Generator Delegation
def sub_generator {
yield 1;
yield 2;
}
def main_generator {
yield 0;
yield from sub_generator(); # Yields 1, then 2
yield 3;
}
# Result sequence: 0, 1, 2, 3
Collection Delegation
def list_generator {
yield from [1, 2, 3]; # Yields each list element
yield from "abc"; # Yields each character
}
Generator State and Memory
Generators maintain state between yields:
Stateful Generators
def counter(start: int, end: int) {
current = start;
while current <= end {
yield current;
current += 1; # State persists between yields
}
}
Local Variable Persistence
def accumulator {
total = 0;
values = [1, 2, 3, 4, 5];
for value in values {
total += value;
yield total; # Yields running sum: 1, 3, 6, 10, 15
}
}
Infinite Generators
Generators can produce infinite sequences:
Infinite Counter
Fibonacci Generator
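Hedged sketches of both generators:
def infinite_counter(start: int) {
    current = start;
    while True {
        yield current;     # Never terminates; caller decides when to stop consuming
        current += 1;
    }
}

def fibonacci {
    a = 0;
    b = 1;
    while True {
        yield a;
        next_val = a + b;
        a = b;
        b = next_val;
    }
}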
Generator Patterns
Data Processing Pipeline
def process_data(raw_data: list) {
for item in raw_data {
if item.is_valid {
processed = item.transform();
yield processed;
}
}
}
Batch Processing
def batch_generator(data: list, batch_size: int) {
batch = [];
for item in data {
batch.append(item);
if len(batch) == batch_size {
yield batch;
batch = [];
}
}
if len(batch) > 0 {
yield batch; # Final partial batch
}
}
Resource Management
def file_line_generator(filename: str) {
file = open(filename);
try {
for line in file {
yield line.strip();
}
} finally {
file.close();
}
}
Generator Expressions vs Generator Functions
Generator Function
Generator Expression (if supported)
Integration with Object-Spatial Features
Generators work within object-spatial contexts:
Walker Data Generation
walker DataCollector {
can collect_from_nodes with `node entry {
for neighbor in [-->] {
data = neighbor.extract_data();
if data.is_valid {
yield data;
}
}
}
}
Node Data Streaming
node DataSource {
can stream_data with Reader entry {
for chunk in self.data_chunks {
yield chunk;
}
}
}
Performance Benefits
Memory Efficiency
# Memory efficient - generates values on demand
def large_sequence {
for i in range(1000000) {
yield expensive_computation(i);
}
}
# vs. Memory intensive - creates entire list
def large_list {
return [expensive_computation(i) for i in range(1000000)];
}
Lazy Evaluation
def lazy_processor(data: list) {
for item in data {
# Computation only happens when value is requested
result = expensive_operation(item);
yield result;
}
}
Generator Composition
Pipeline Pattern
def source {
for i in range(100) {
yield i;
}
}
def filter_even {
for value in source() {
if value % 2 == 0 {
yield value;
}
}
}
def transform {
for value in filter_even() {
yield value * 2;
}
}
Error Handling in Generators
def safe_generator(data: list) {
for item in data {
try {
result = risky_operation(item);
yield result;
} except Exception as e {
yield error_value(e);
}
}
}
Generator Cleanup
def resource_generator {
resource = acquire_resource();
try {
while resource.has_data() {
yield resource.next_item();
}
} finally {
resource.release(); # Cleanup when generator is destroyed
}
}
Best Practices
- Use for large datasets: Generators excel with large or infinite sequences
- Document generator behavior: Clearly specify what values are yielded
- Handle cleanup: Use try/finally for resource management
- Avoid state mutation: Be careful with shared mutable state
- Consider composition: Chain generators for processing pipelines
Common Patterns
Data Transformation
def transform_records(records: list) {
for record in records {
transformed = record.transform();
if transformed.is_valid {
yield transformed;
}
}
}
Event Generation
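A sketch of what an event-producing generator could look like; the EventSource type and its is_open/poll methods are hypothetical:
def event_stream(source: EventSource) {
    while source.is_open() {
        event = source.poll();    # Hypothetical polling API
        if event {
            yield event;
        }
    }
}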
Pagination
def paginated_data(source: DataSource) {
page = 1;
while true {
data = source.get_page(page);
if not data {
break;
}
yield from data;
page += 1;
}
}
Yield statements in Jac enable powerful generator-based programming patterns that promote memory efficiency, lazy evaluation, and elegant iteration patterns. They provide a foundation for building data processing pipelines, handling large datasets, and implementing custom iteration protocols that integrate seamlessly with both traditional programming constructs and Jac's object-spatial features.
Raise statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Raise statements in Jac provide the mechanism for explicitly throwing exceptions, enabling structured error handling and control flow disruption. These statements allow functions and methods to signal error conditions, invalid states, or exceptional circumstances that require special handling by calling code.
Basic Raise Statement Syntax
Raise statements follow this pattern from the grammar:
raise exception_expression; # Raise specific exception
raise; # Re-raise current exception (in except block)
raise exception_expression from cause; # Raise with explicit cause chain
Example Implementation
The provided example demonstrates exception raising and handling:
Exception handling:
Key aspects:
- Conditional raising: Exceptions raised based on input validation
- Exception type: ValueError indicates the type of error
- Error message: Descriptive string explaining the error condition
- Exception catching: Try-catch block handles the raised exception
Exception Types
Jac supports various built-in exception types:
Built-in Exceptions
raise ValueError("Invalid value provided");
raise TypeError("Expected int, got str");
raise IndexError("List index out of range");
raise KeyError("Dictionary key not found");
raise RuntimeError("General runtime error");
Custom Exceptions
class CustomError : Exception {
def init(message: str) {
self.message = message;
}
}
raise CustomError("Application-specific error");
Raise Patterns
Input Validation
def divide(a: float, b: float) -> float {
if b == 0.0 {
raise ZeroDivisionError("Cannot divide by zero");
}
return a / b;
}
State Validation
def process_data(data: list) {
if data is None {
raise ValueError("Data cannot be None");
}
if len(data) == 0 {
raise ValueError("Data cannot be empty");
}
# Process data
}
Type Checking
def calculate_area(shape: Shape) {
if not isinstance(shape, Shape) {
raise TypeError("Expected Shape instance");
}
return shape.calculate_area();
}
Re-raising Exceptions
Bare raise statements re-raise the current exception:
Exception Logging and Re-raising
def sensitive_operation() {
try {
risky_function();
} except Exception as e {
log_error("Operation failed", e);
raise; # Re-raise the same exception
}
}
Exception Transformation
def api_call() {
try {
internal_operation();
} except InternalError as e {
raise APIError("Public API failed") from e;
}
}
Exception Chaining
The from clause enables exception chaining:
Explicit Cause Chain
def high_level_operation() {
try {
low_level_operation();
} except LowLevelError as e {
raise HighLevelError("High-level operation failed") from e;
}
}
Suppressing Chain
def clean_operation() {
try {
messy_operation();
} except MessyError {
raise CleanError("Clean error message") from None;
}
}
Conditional Exception Raising
Raise statements often appear in conditional contexts:
Guard Clauses
def process_file(filename: str) {
if not filename {
raise ValueError("Filename cannot be empty");
}
if not file_exists(filename) {
raise FileNotFoundError("File does not exist");
}
# Process file
}
State Machine Validation
class StateMachine {
def transition(new_state: str) {
if not self.is_valid_transition(new_state) {
raise InvalidStateError("Invalid state transition");
}
self.state = new_state;
}
}
Integration with Object-Spatial Features
Raise statements work within object-spatial contexts:
Walker Error Handling
walker DataProcessor {
can process with `node entry {
if not here.is_valid {
raise ProcessingError("Invalid node data");
}
try {
here.process_data();
} except DataError as e {
raise WalkerError("Walker processing failed") from e;
}
}
}
Node Validation
node SecureNode {
can validate_access with Visitor entry {
if not visitor.has_permission {
raise PermissionError("Access denied");
}
if visitor.security_level < self.required_level {
raise SecurityError("Insufficient security level");
}
}
}
Exception Handling Patterns
Resource Management
def acquire_resource() {
resource = None;
try {
resource = allocate_resource();
if not resource.is_valid {
raise ResourceError("Failed to allocate resource");
}
return resource;
} except Exception as e {
if resource {
resource.cleanup();
}
raise;
}
}
Retry Patterns
def retry_operation(max_attempts: int) {
for attempt=1 to attempt<=max_attempts by attempt+=1 {
try {
return perform_operation();
} except TemporaryError as e {
if attempt == max_attempts {
raise FinalError("All retry attempts failed") from e;
}
wait_before_retry();
}
}
}
Error Context Preservation
Detailed Error Information
def parse_config(config_data: str) {
try {
return json.parse(config_data);
} except JsonError as e {
line_number = e.get_line_number();
raise ConfigError("Invalid config at line {}".format(line_number)) from e;
}
}
Performance Considerations
Exception Overhead
- Raising exceptions is expensive compared to normal control flow
- Use exceptions for exceptional conditions, not normal program flow
- Consider early validation to avoid deep call stack exceptions
Optimization Strategies
# Efficient: Check before expensive operation
def safe_operation(data: list) {
if not validate_data(data) {
raise ValidationError("Invalid data");
}
return expensive_operation(data);
}
# Less efficient: Exception in expensive operation
def unsafe_operation(data: list) {
try {
return expensive_operation(data);
} except InternalError {
raise OperationError("Operation failed");
}
}
Exception Safety
Exception-Safe Code
def atomic_operation() {
state = save_state();
try {
perform_risky_operation();
} except Exception as e {
restore_state(state);
raise;
}
}
Best Practices
- Use specific exception types: Choose appropriate exception classes
- Provide descriptive messages: Include context and possible solutions
- Validate early: Check preconditions at function entry
- Preserve exception chains: Use the from clause for causal relationships
- Clean up resources: Ensure proper cleanup even when exceptions occur
Common Exception Patterns
Factory Methods
def create_object(type_name: str) {
if type_name not in valid_types {
raise ValueError("Unknown type: {}".format(type_name));
}
return type_constructors[type_name]();
}
Protocol Validation
def send_message(message: Message) {
if not message.is_valid() {
raise ProtocolError("Invalid message format");
}
if message.size() > MAX_MESSAGE_SIZE {
raise MessageTooLargeError("Message exceeds size limit");
}
transmit(message);
}
Graceful Degradation
def get_user_preference(user_id: str, key: str) {
try {
return database.get_preference(user_id, key);
} except DatabaseError as e {
log_warning("Database error, using default", e);
raise PreferenceError("Cannot retrieve preference") from e;
}
}
Integration with Testing
def test_division_by_zero() {
try {
result = divide(10, 0);
assert false, "Expected ZeroDivisionError";
} except ZeroDivisionError {
# Test passes - expected exception
pass;
}
}
Raise statements in Jac provide essential error signaling capabilities that enable robust exception handling, clear error communication, and structured error recovery patterns. They support both simple error reporting and sophisticated exception chaining, making them valuable tools for building reliable applications that can gracefully handle exceptional conditions and provide meaningful error feedback.
Assert statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Assert statements in Jac provide a mechanism for debugging and testing by allowing developers to verify that certain conditions hold true during program execution. When an assertion fails, it raises an AssertionError exception, which can be caught and handled like any other exception.
Basic Assert Syntax
The basic syntax for an assert statement is:
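In schematic form:
assert condition;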
This will evaluate the condition, and if it is false or falsy, an AssertionError will be raised.
Assert with Custom Message
Jac also supports assert statements with custom error messages:
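Schematically, and mirroring the example described below:
assert value > 0, "Value must be positive";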
When the assertion fails, the custom message will be included in the AssertionError, making debugging easier by providing context about what went wrong.
Exception Handling
Assert statements generate AssertionError exceptions when they fail, which can be caught using try-except blocks. This allows for graceful handling of assertion failures in production code or testing scenarios.
Use Cases
Assert statements are commonly used for:
- Input validation: Checking that function parameters meet expected conditions
- Testing: Verifying that code produces expected results
- Debugging: Ensuring that program state is as expected at specific points
- Documentation: Expressing assumptions about program behavior
The provided code example demonstrates a function foo that asserts its input parameter value must be positive. When called with a negative value (-5), the assertion fails and raises an AssertionError with the message "Value must be positive", which is then caught and handled in a try-except block.
Assert statements are an essential tool for writing robust and reliable Jac programs, providing early detection of invalid conditions and helping maintain program correctness.
Check statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Check statements in Jac provide a built-in testing mechanism that integrates directly into the language syntax. They are primarily used within test blocks to verify that specific conditions hold true, forming the foundation of Jac's integrated testing framework.
Basic Syntax
The basic syntax for a check statement is:
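Schematically:
check expression;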
The check keyword evaluates the provided expression and verifies that it returns a truthy value. If the expression evaluates to false or a falsy value, the check fails and reports a test failure.
Integration with Test Blocks
Check statements are most commonly used within test blocks, which are Jac's language-level construct for organizing and running tests:
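As a minimal sketch (test names and values here are illustrative, with module-level globals declared via glob):
glob a: int = 5;
glob b: int = 2;

test check_inequality {
    check a != b;          # Passes: 5 != 2
}

test check_difference {
    check a - b == 3;      # Passes: 5 - 2 == 3
}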
Types of Checks
Check statements can verify various types of conditions:
Equality and Comparison Checks
- check a == b; - Verifies two values are equal
- check a != b; - Verifies two values are not equal
- check a > b; - Verifies comparison relationships
Function Result Checks
- check almostEqual(a, 6); - Verifies function returns truthy value
- check someFunction(); - Verifies function execution succeeds
Membership and Containment Checks
- check "d" in "abc"; - Verifies membership relationships
- check item in collection; - Verifies containment
Expression Evaluation Checks
- check a - b == 3; - Verifies complex expressions evaluate correctly
Testing Benefits
The integration of check statements directly into the language provides several advantages:
- Language-level support: Testing is a first-class citizen in Jac
- Simplified syntax: No need to import testing frameworks
- Clear semantics: The check keyword makes test intentions explicit
- Integrated reporting: Failed checks are automatically reported by the language runtime
Test Organization
The provided code example demonstrates organizing multiple test cases using named test blocks (test1, test2, test3, test4), each containing specific check statements that verify different aspects of the global variables a and b.
Check statements make testing an integral part of Jac development, encouraging developers to write tests as they build their applications and ensuring code correctness through built-in verification mechanisms.
Delete statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Delete statements in Jac remove objects, nodes, edges, or properties from memory and graph structures. The del keyword provides a unified interface for deletion operations across different contexts.
Syntax#
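In schematic form, del is applied to any deletable target:
del variable;              # Remove a name from the current scope
del object.attribute;      # Remove an attribute from an object
del collection[index];     # Remove an element from a collection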
Deleting Variables#
Remove variables from the current scope:
# Delete single variable
x = 10;
del x; # x no longer exists
# Delete multiple variables
a, b, c = 1, 2, 3;
del a, b, c;
Deleting Object Properties#
Remove attributes from objects:
obj Person {
has name: str;
has age: int;
has email: str = "";
}
with entry {
p = Person(name="Alice", age=30, email="alice@example.com");
# Delete optional property
del p.email;
# Accessing deleted property may raise error
}
Deleting List Elements#
Remove items from lists by index:
items = [1, 2, 3, 4, 5];
# Delete by index
del items[2]; # items = [1, 2, 4, 5]
# Delete slice
del items[1:3]; # items = [1, 5]
# Delete with negative index
del items[-1]; # items = [1]
Deleting Dictionary Entries#
Remove key-value pairs:
data = {"a": 1, "b": 2, "c": 3};
# Delete by key
del data["b"]; # data = {"a": 1, "c": 3}
# Conditional deletion
if "c" in data {
del data["c"];
}
Deleting Nodes#
Remove nodes from the graph structure:
node DataNode {
has value: any;
}
walker Cleaner {
can clean with entry {
# Delete nodes matching criteria
for n in [-->] {
if n.value is None {
del n; # Node and its edges are removed
}
}
}
}
Deleting Edges#
Remove connections between nodes:
# Delete specific edge types
del source_node -->:EdgeType:--> target_node;
# Delete all outgoing edges
del node [-->];
# Delete filtered edges
del node [-->(?.weight < threshold)];
Graph Operations#
Complex deletion patterns:
walker GraphPruner {
has min_connections: int = 2;
can prune with entry {
# Delete weakly connected nodes
weak_nodes = [];
for n in [-->] {
if len(n[<-->]) < self.min_connections {
weak_nodes.append(n);
}
}
# Delete collected nodes
for n in weak_nodes {
del n; # Automatically removes associated edges
}
}
}
Edge Deletion Patterns#
node Network {
can remove_connection(target: node) {
# Delete edge between self and target
del self --> target;
}
can clear_outgoing {
# Delete all outgoing edges
del self [-->];
}
can prune_weak_edges(threshold: float) {
# Delete edges below threshold
del self [-->(?.weight < threshold)];
}
}
Cascading Deletions#
When nodes are deleted, associated edges are automatically removed:
with entry {
# Create connected structure
a = Node();
b = Node();
c = Node();
a ++> b ++> c;
# Deleting b removes edges a->b and b->c
del b;
# a and c still exist but are disconnected
}
Memory Management#
Jac handles cleanup automatically:
walker MemoryManager {
can cleanup with entry {
# Process large data
temp_nodes = [];
for i in range(1000) {
n = DataNode(data=large_object);
temp_nodes.append(n);
}
# Process nodes...
# Explicit cleanup
for n in temp_nodes {
del n;
}
# Memory is reclaimed
}
}
Best Practices#
- Check Before Delete: Verify existence before deletion
- Handle Dependencies: Consider edge deletion when removing nodes
- Batch Operations: Group deletions for efficiency
- Clean Up Resources: Delete temporary nodes/edges after use
- Document Side Effects: Deletion can affect graph connectivity
Common Patterns#
Filtered Node Deletion#
walker FilterDelete {
can delete_by_type(type_name: str) {
targets = [-->(`type_name)];
for t in targets {
del t;
}
}
}
Conditional Edge Removal#
can prune_edges(node: node, condition: func) {
edges = node[<-->];
for e in edges {
if condition(e) {
del e;
}
}
}
Safe Deletion#
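One illustrative shape for this (the node fields and walker name are hypothetical) combines a safety check with the deletion itself:
walker SafeCleaner {
    can clean with entry {
        for n in [-->] {
            # Only delete nodes that are expired and no longer connected onward
            if n.is_expired and len(n[-->]) == 0 {
                del n;
            }
        }
    }
}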
Delete statements provide essential cleanup capabilities for managing memory and graph structure integrity in Jac programs. They work seamlessly with the object-spatial model to maintain consistent graph states.
Report statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Report statements provide a mechanism for walkers to communicate results back to their spawning context. This feature is essential for extracting information from graph traversals and object-spatial computations.
Syntax#
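In schematic form:
report expression;     # Append a value to the walker's collected results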
Purpose#
Report statements allow walkers to:
- Return computed results from traversals
- Aggregate data collected across multiple nodes
- Communicate findings to the calling context
- Build up results incrementally during traversal
Basic Usage#
walker DataCollector {
can collect with entry {
# Report individual node data
report here.data;
# Continue traversal
visit [-->];
}
}
# Spawn walker and collect reports
with entry {
results = spawn DataCollector();
# results contains all reported values
}
Multiple Reports#
Walkers can report multiple times during traversal:
walker PathFinder {
has max_depth: int = 3;
has depth: int = 0;
can explore with entry {
if self.depth >= self.max_depth {
return;
}
# Report current path node
report {
"node": here,
"depth": self.depth,
"value": here.value
};
# Explore deeper
self.depth += 1;
visit [-->];
self.depth -= 1;
}
}
Aggregating Results#
Common pattern for collecting data:
walker Aggregator {
has total: float = 0.0;
has count: int = 0;
can aggregate with entry {
# Process current node
self.total += here.value;
self.count += 1;
# Visit children
visit [-->];
}
can aggregate with exit {
# Report final aggregation
if self.count > 0 {
report {
"average": self.total / self.count,
"total": self.total,
"count": self.count
};
}
}
}
Conditional Reporting#
Report based on conditions:
walker SearchWalker {
has target: str;
can search with entry {
# Report only matching nodes
if here.name == self.target {
report {
"found": here,
"path": self.path,
"properties": here.to_dict()
};
}
# Continue search
visit [-->];
}
}
Report vs Return#
Key differences:
- report: Accumulates values, continues execution
- return: Exits current ability immediately
walker Finder {
can find with entry {
if here.is_target {
report here; # Add to results
return; # Stop searching this branch
}
visit [-->];
}
}
Integration with Object-Spatial#
Reports work seamlessly with graph traversal:
node DataNode {
has id: str;
has value: float;
has category: str;
}
walker Analyzer {
has category_filter: str;
can analyze with entry {
# Filter and report
if here.category == self.category_filter {
report {
"id": here.id,
"value": here.value,
"connections": len([-->])
};
}
# Traverse to connected nodes
visit [-->(?.category == self.category_filter)];
}
}
# Usage
with entry {
results = spawn Analyzer(category_filter="important") on root;
print(f"Found {len(results)} matching nodes");
}
Advanced Patterns#
Path Tracking#
walker PathTracker {
has path: list = [];
can track with entry {
self.path.append(here);
# Report complete paths at leaves
if len([-->]) == 0 {
report self.path.copy();
}
visit [-->];
}
can track with exit {
self.path.pop();
}
}
Hierarchical Aggregation#
walker TreeAggregator {
can aggregate with entry {
# Visit children first
visit [-->];
}
can aggregate with exit {
# Aggregate after processing children
child_sum = sum([child.value for child in [-->]]);
total = here.value + child_sum;
report {
"node": here,
"node_value": here.value,
"subtree_total": total
};
}
}
Best Practices#
- Report Meaningful Data: Include context with reported values
- Use Structured Reports: Return dictionaries for complex data
- Consider Memory: Large traversals with many reports can accumulate
- Report Early: Don't wait until the end if intermediate results matter
- Combine with Disengage: Use disengage after critical reports
Report statements are fundamental to the walker pattern in Jac, enabling elegant extraction of information from graph structures while maintaining clean separation between traversal logic and result collection.
Control statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Control statements provide essential flow control mechanisms for managing program execution within loops and conditional structures. These statements enable precise control over iteration and branching, complementing Jac's object-spatial features with traditional imperative programming constructs.
Basic Control Operations#
Jac supports fundamental control statements for loop management:
- break: Immediately exits the current loop and transfers control to the statement following the loop structure.
- continue: Skips the remainder of the current loop iteration and proceeds to the next iteration.
- skip: Object-spatial equivalent for walker traversal control (covered in the walker statements documentation).
Break Statement
The break statement immediately terminates the innermost loop and transfers control to the statement following the loop:
Execution flow:
1. Loop begins with i = 0
2. Prints 0, then 1, then 2
3. When i = 3, condition i > 2 becomes true
4. Prints "loop is stopped!!"
5. break executes, immediately exiting the loop
6. Execution continues after the loop block
Continue Statement
The continue statement skips the remainder of the current loop iteration and jumps to the next iteration:
Execution flow:
1. First iteration: j = "W"
2. Condition j == "W" is true
3. continue executes, skipping the print(j) statement
4. Second iteration: j = "I"
5. Condition is false, print("I") executes
6. Third iteration: j = "N"
7. Condition is false, print("N") executes
Loop Integration
Control statements work with all Jac loop constructs:
For-In Loops
for item in collection {
if condition {
break; # Exit loop
}
if other_condition {
continue; # Skip to next item
}
# Process item
}
For-To-By Loops
for i=0 to i<10 by i+=1 {
if i % 2 == 0 {
continue; # Skip even numbers
}
if i > 7 {
break; # Stop when i exceeds 7
}
print(i); # Prints 1, 3, 5, 7
}
While Loops
while condition {
if exit_condition {
break; # Exit while loop
}
if skip_condition {
continue; # Skip to condition check
}
# Loop body
}
Nested Loop Behavior
Control statements affect only the innermost loop:
for i in range(3) {
for j in range(3) {
if j == 1 {
break; # Exits inner loop only
}
print(i, j);
}
print("Outer loop continues");
}
Output pattern:
- Inner loop breaks when j == 1
- Outer loop continues for all values of i
- Each outer iteration prints "Outer loop continues"
Conditional Integration
Control statements work seamlessly with Jac's conditional expressions:
Simple Conditions
Complex Conditions
for data in dataset {
if data.type == "error" and data.severity > threshold {
print("Critical error found");
break; # Stop processing on critical error
}
analyze(data);
}
Function and Method Context
Control statements can be used within functions and methods:
def process_list(items: list) -> list {
results = [];
for item in items {
if item < 0 {
continue; # Skip negative values
}
if item > 100 {
break; # Stop at first value over 100
}
results.append(item * 2);
}
return results;
}
Object-Spatial Integration
While control statements primarily affect traditional loops, they complement object-spatial operations:
walker Processor {
can process_nodes with `root entry {
for node in [-->] {
if node.should_skip {
continue; # Skip certain nodes
}
if node.stop_condition {
break; # Exit processing loop
}
node.process();
}
}
}
Error Handling Patterns
Control statements enable robust error handling:
Early Exit on Error
for operation in operations {
if operation.has_error() {
print("Error detected, stopping");
break;
}
operation.execute();
}
Skip Invalid Data
for record in data_records {
if not record.is_valid() {
continue; # Skip malformed records
}
process_record(record);
}
Performance Considerations
Control statements are optimized for efficiency:
Break Optimization
- Immediately exits loop without further condition checking
- Minimal overhead for early termination
- Useful for search algorithms and error conditions
Continue Optimization
- Jumps directly to next iteration
- Skips unnecessary computation in current iteration
- Efficient for filtering operations
Common Patterns
Search and Exit
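An illustrative search loop (record fields are hypothetical) that exits as soon as a match is found:
found = None;
for record in records {
    if record.id == target_id {
        found = record;
        break;      # Stop scanning once the target is located
    }
}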
Filter Processing
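An illustrative filtering loop (field names are hypothetical) that skips invalid entries:
valid_total = 0;
for reading in sensor_readings {
    if not reading.is_valid() {
        continue;   # Ignore readings that fail validation
    }
    valid_total += reading.value;
}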
Batch Processing with Limits
processed = 0;
for item in large_dataset {
if processed >= batch_limit {
break;
}
process_item(item);
processed += 1;
}
Best Practices
- Clear Intent: Use control statements to make loop logic explicit
- Early Exit: Use break for efficiency when search conditions are met
- Filtering: Use continue to skip invalid or unnecessary data
- Limit Scope: Control statements affect only the immediate loop
- Readable Code: Combine with clear conditional logic for maintainability
Control statements in Jac provide essential building blocks for algorithmic logic, enabling developers to implement efficient loops with precise flow control. While Jac's object-spatial features offer novel traversal mechanisms, traditional control statements remain crucial for implementing conventional algorithms and handling edge cases in data processing workflows.
Object spatial Walker statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Walker statements control the movement and lifecycle of computational entities within topological structures. These statements implement the core data spatial paradigm where computation moves to data through controlled traversal of nodes and edges.
Visit Statement#
The visit statement directs a walker to traverse to specified locations within the topological structure:
Visit statements add destinations to the walker's traversal queue, enabling dynamic path construction during execution. The walker processes queued destinations sequentially, triggering entry and exit abilities at each location. When visiting edges, both the edge and its appropriate endpoint node are automatically queued to maintain proper traversal flow.
The optional edge filtering syntax allows walkers to traverse only specific edge types, enabling sophisticated graph navigation patterns. The else clause provides fallback behavior when traversal conditions are not met.
Ignore Statement#
The ignore statement excludes specific nodes or edges from traversal consideration:
This statement prevents walkers from visiting specified locations, effectively creating traversal filters that help optimize pathfinding and implement selective graph exploration strategies. Ignored locations remain in the graph structure but become invisible to the current walker's traversal logic.
Disengage Statement#
The disengage statement immediately terminates a walker's active traversal:
When executed, disengage clears the walker's traversal queue and transitions it back to inactive object state. The walker preserves all accumulated data and state from its traversal, making this information available for subsequent processing. This statement enables early termination patterns and conditional traversal completion.
Traversal Control Patterns#
These statements combine to enable sophisticated traversal algorithms:
walker PathFinder {
has target: str;
has visited: set[node] = set();
can search with entry {
# Mark current location as visited
self.visited.add(here);
# Check if target found
if (here.name == self.target) {
report here;
disengage;
}
# Continue to unvisited neighbors
unvisited = [n for n in [-->] if n not in self.visited];
if (unvisited) {
visit unvisited;
} else {
# Backtrack if no unvisited neighbors
disengage;
}
}
}
Walker statements embody the fundamental principle of mobile computation, enabling algorithmic behaviors to flow through data structures while maintaining clear separation between computational logic (walkers) and data storage (nodes and edges).
Visit statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Visit statements in Jac implement the fundamental object-spatial operation that enables walkers to traverse through node-edge topological structures. This statement embodies the core Object-Spatial Programming (OSP) paradigm of "computation moving to data" rather than the traditional approach of moving data to computation.
Theoretical Foundation
In OSP theory, the visit statement (\(\triangleright\)) allows walkers to move between nodes and edges in the topological structure, representing the dynamic traversal capability central to the paradigm. Walkers are autonomous computational entities that traverse node-edge structures, carrying state and behaviors that execute based on their current location.
Basic Visit Syntax
The basic syntax for visit statements follows this pattern:
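Schematically:
visit target;             # Queue nodes or edges for traversal
visit :index:target;      # Queue with an explicit insertion index
visit target else {
    # Fallback when there is nothing to visit
}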
Directional Visit Patterns
The example demonstrates directional traversal using arrow notation:
The [-->] syntax represents traversal along outgoing edges from the current node. This pattern enables walkers to:
- Explore connected nodes: Move to nodes reachable via outgoing edges
- Follow topological paths: Traverse the graph structure according to connection patterns
- Implement search algorithms: Use systematic traversal to locate specific nodes or data
Queue Insertion Index Semantics
Visit statements support an advanced feature that controls traversal behavior through queue insertion indices:
visit :0:[-->]; # Insert at beginning (index 0)
visit :-1:[-->]; # Insert at end (index -1)
visit :2:[-->]; # Insert at index 2
visit :-3:[-->]; # Insert 3 positions from end
visit [-->]; # Default behavior
This syntax controls where new destinations are inserted into the walker's traversal queue:
- :0: - Insert at the beginning of the queue (index 0)
  - Results in depth-first style traversal
  - Newly discovered nodes are visited immediately before previously queued nodes
  - The walker explores paths deeply before backtracking
- :-1: - Insert at the end of the queue (index -1)
  - Results in breadth-first style traversal
  - Newly discovered nodes are visited after all currently queued nodes
  - The walker explores all nodes at the current level before moving deeper
- Other positive indices (e.g., :1:, :2:, :3:) - Insert at the specified position from the beginning
  - Enables custom traversal ordering strategies
  - Useful for priority-based or weighted traversal algorithms
- Other negative indices (e.g., :-2:, :-3:) - Insert at the specified position from the end
  - Allows fine-grained control over queue ordering
  - Supports complex traversal patterns beyond simple depth/breadth-first
- No index - Default queue insertion behavior
  - Implementation-specific ordering
  - Typically follows standard traversal semantics
Practical Example
Consider a walker that uses conditional queue insertion:
walker MyWalker {
can does with MyNode entry {
if here.val == 20 {
visit :0:[-->]; # Depth-first from this node
}
elif here.val == 30 {
visit :-1:[-->]; # Breadth-first from this node
}
else {
visit [-->]; # Default traversal
}
}
}
This demonstrates:
- Dynamic traversal strategies: Different nodes can trigger different traversal behaviors
- Fine-grained control: Precise specification of exploration patterns
- Adaptive algorithms: Traversal strategy can change based on node properties or walker state
Traversal Queue Mechanics
When a walker executes a visit statement:
- Target identification: The walker identifies all nodes matching the visit pattern (e.g., [-->])
- Queue insertion: New destinations are inserted at the specified index: :0: pushes to the front (stack-like behavior), :-1: appends to the end (queue-like behavior)
- Next visit: The walker moves to the node at the front of its queue
- Continuation: Process repeats until the queue is empty or the walker disengages
This queue-based approach enables sophisticated traversal patterns while maintaining the intuitive OSP programming model.
Conditional Traversal with Else Clauses
Visit statements support else clauses that execute when the primary visit target is unavailable:
- Fallback behavior: When [-->] finds no outgoing edges, the else block executes
- Graceful handling: Provides alternative actions when traversal paths are exhausted
- Control flow: Enables complex navigation logic with built-in error handling
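For example, an illustrative fallback in the style of the walker example shown later in the Disengage statements section:
visit [-->] else {
    print("No outgoing edges from this node");
    disengage;
}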
Walker Abilities and Visit Integration
The example shows a walker ability that automatically triggers visit behavior:
Key aspects:
- Implicit activation: The travel ability triggers automatically when the walker enters a root node
- Context-sensitive execution: Behavior adapts based on the walker's current location
- Distributed computation: Logic executes at data locations rather than centralized functions
Node Response to Walker Visits
Nodes can define abilities that respond to walker visits:
This demonstrates:
- Location-bound computation: Nodes contain computational abilities triggered by visitor arrival
- Type-specific responses: Different behaviors for different walker types
- Bidirectional interaction: Both walkers and nodes participate in computation
Traversal Lifecycle
The complete traversal process involves:
- Walker spawning: root spawn Visitor() activates the walker at the root node
- Ability triggering: The walker's travel ability executes upon entry
- Visit execution: The walker moves to connected nodes via visit [-->]
- Node response: Each visited node's speak ability triggers
- Fallback handling: If no outgoing edges exist, the else clause executes
- Termination: disengage removes the walker from active traversal
Object-Spatial Benefits
Visit statements enable several key advantages:
- Natural graph algorithms: Traversal logic maps directly to problem domain topology
- Decoupled computation: Algorithms separate from data structure implementation
- Context-aware processing: Computation adapts to local data and connection patterns
- Intuitive control flow: Navigation follows the natural structure of connected data
Common Patterns
Visit statements support various traversal patterns:
- Breadth-first exploration: Systematic traversal of all reachable nodes using visit :-1:[-->]
- Depth-first search: Following paths to their conclusion before backtracking using visit :0:[-->]
- Conditional navigation: Choosing paths based on node properties or walker state
- Cyclic traversal: Returning to previously visited nodes for iterative processing
- Hybrid strategies: Mixing depth-first and breadth-first based on node properties
The provided example demonstrates a simple breadth-first traversal where a walker visits all nodes connected to the root, printing a message at each location. This illustrates how visit statements transform graph traversal from complex algorithmic implementation to intuitive navigation through connected data structures.
Visit statements represent a fundamental shift in programming paradigms, enabling developers to express algorithms in terms of movement through data topologies rather than data manipulation through function calls.
Ignore statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Ignore statements provide a mechanism to exclude specific nodes or edges from walker traversal paths. This feature enables selective graph navigation by marking elements that should be skipped during traversal operations.
Syntax#
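In schematic form:
ignore target;     # A node, edge, or collection to exclude from traversal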
Purpose#
Ignore statements allow walkers to:
- Skip specific nodes during traversal
- Exclude edges from path consideration
- Create filtered traversal patterns
- Optimize navigation by avoiding irrelevant paths
Basic Usage#
walker Traverser {
can traverse with entry {
# Ignore specific nodes
ignore here.blocked_nodes;
# Visit all other connected nodes
visit [-->];
}
}
Ignoring Nodes#
Mark nodes to be skipped:
walker Searcher {
has visited: set = set();
can search with entry {
# Avoid revisiting nodes
if here in self.visited {
return;
}
self.visited.add(here);
# Ignore nodes marked as inactive
inactive = [-->(?.active == False)];
ignore inactive;
# Visit only active nodes
visit [-->];
}
}
Ignoring Edges#
Exclude specific connections:
walker PathFinder {
can find_path with entry {
# Ignore low-weight edges
weak_edges = [-->(?.weight < 0.5)];
ignore weak_edges;
# Traverse only strong connections
visit [-->];
}
}
Conditional Ignoring#
Dynamic exclusion based on conditions:
walker ConditionalTraverser {
has security_level: int;
can traverse with entry {
# Ignore nodes above security clearance
restricted = [];
for n in [-->] {
if n.required_level > self.security_level {
restricted.append(n);
}
}
ignore restricted;
# Visit accessible nodes
visit [-->];
}
}
Pattern-Based Ignoring#
Use type and property filters:
walker TypedExplorer {
can explore with entry {
# Ignore specific node types
ignore [-->(`BlockedType)];
# Ignore nodes matching pattern
ignore [-->(?.category == "excluded")];
# Visit remaining nodes
visit [-->];
}
}
Integration with Visit#
Combine ignore and visit for precise control:
walker SmartNavigator {
can navigate with entry {
all_neighbors = [-->];
# Categorize nodes
high_priority = [];
low_priority = [];
blocked = [];
for n in all_neighbors {
if n.priority > 0.8 {
high_priority.append(n);
} elif n.priority > 0.3 {
low_priority.append(n);
} else {
blocked.append(n);
}
}
# Ignore low-value nodes
ignore blocked;
# Visit high priority first
visit high_priority;
visit low_priority;
}
}
Temporary Ignoring#
Ignore within specific contexts:
walker ContextualWalker {
has ignore_list: list = [];
can process with entry {
# Temporarily ignore nodes
if here.is_checkpoint {
self.ignore_list = here.get_blocked_paths();
}
# Apply current ignore list
ignore self.ignore_list;
# Clear ignore list at boundaries
if here.is_boundary {
self.ignore_list = [];
}
visit [-->];
}
}
Performance Optimization#
Use ignore to prune search spaces:
walker EfficientSearcher {
has max_cost: float;
has current_cost: float = 0.0;
can search with entry {
# Update path cost
self.current_cost += here.cost;
# Ignore paths exceeding budget
expensive_paths = [];
for n in [-->] {
if self.current_cost + n.estimated_cost > self.max_cost {
expensive_paths.append(n);
}
}
ignore expensive_paths;
# Continue with viable paths
visit [-->];
# Restore cost on exit
self.current_cost -= here.cost;
}
}
Relationship with Graph Structure#
Ignore statements don't modify the graph:
walker Observer {
can observe with entry {
# Count all connections
total_edges = len([-->]);
# Ignore some nodes
ignore [-->(?.temporary)];
# Original graph unchanged
assert len([-->]) == total_edges;
# But traversal is filtered
visit [-->]; # Skips ignored nodes
}
}
Best Practices#
- Clear Criteria: Use explicit conditions for ignoring
- Document Reasons: Explain why nodes are ignored
- Consider Alternatives: Sometimes filtering in visit is clearer
- Reset State: Clear ignore lists when appropriate
- Performance: Ignore early to avoid unnecessary computation
Common Patterns#
Visited Set Pattern#
walker DepthFirst {
has visited: set = set();
can traverse with entry {
self.visited.add(here);
# Ignore already visited
ignore [n for n in [-->] if n in self.visited];
visit [-->];
}
}
Type-Based Filtering#
walker TypeFilter {
has allowed_types: list;
can filter with entry {
# Ignore non-matching types
for t in self.allowed_types {
ignore [-->(!`t)];
}
visit [-->];
}
}
Threshold-Based Pruning#
walker ThresholdWalker {
has min_score: float;
can walk with entry {
# Ignore low-scoring paths
ignore [-->(?.score < self.min_score)];
# Process high-scoring nodes
visit [-->];
}
}
Ignore statements provide essential control over traversal patterns, enabling efficient and targeted graph navigation while maintaining clean, readable code. They work in harmony with visit statements to create sophisticated traversal algorithms.
Disengage statements#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Disengage statements in Jac provide a mechanism for terminating walker traversal within the object-spatial topology. This statement enables walkers to exit their active traversal state and return to inactive object status, representing a controlled termination of the "computation moving to data" process that characterizes Object-Spatial Programming.
Theoretical Foundation
In OSP theory, the disengage statement allows a walker to immediately terminate its entire object-spatial traversal and return to an inactive object state. When executed, it sets the walker's location to inactive (L(w) ← ∅) and clears its traversal queue (Q_w ← []), effectively removing the walker from active participation in the distributed computational system.
Basic Disengage Syntax
The disengage statement uses simple syntax:
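Schematically:
disengage;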
This statement can be executed from various contexts within the object-spatial execution environment.
Execution Contexts
Disengage statements can be called from multiple contexts:
From Walker Abilities Walkers can disengage themselves during their traversal:
walker Visitor {
can travel with `root entry {
visit [-->] else {
visit root;
# Walker disengages itself
}
}
}
From Node Abilities Nodes can disengage visiting walkers, as demonstrated in the example:
node item {
can speak with Visitor entry {
print("Hey There!!!");
disengage; # Node disengages the visiting walker
}
}
This showcases the bidirectional nature of object-spatial computation, where both walkers and the locations they visit can control the traversal process.
Execution Semantics
When a disengage statement executes:
- Immediate Termination: All remaining ability execution at the current location is immediately terminated
- Bypass Exit Processing: Any exit abilities for the current location type are bypassed
- Queue Clearing: The walker's traversal queue is completely cleared (Q_w ← [])
- Location Reset: The walker's location is set to inactive (L(w) ← ∅)
- State Transition: The walker transitions from an active participant in the distributed computational system to an inactive object
- Data Preservation: The walker retains all its properties and data accumulated during traversal
Comparison with Traditional Control Flow
The disengage statement is analogous to the break statement in traditional loop constructs, but operates within the context of topological traversal rather than iterative control structures. While break exits loops, disengage exits the entire object-spatial execution context.
Use Cases
Disengage statements are commonly used for:
Early Termination
- Target Found: Stopping traversal when a specific node or condition is discovered
- Completion Criteria: Terminating when computational objectives are achieved
- Error Conditions: Exiting traversal when invalid states or data are encountered
Resource Management
- Traversal Limits: Preventing infinite or excessively long traversals
- Performance Optimization: Stopping unnecessary exploration when results are obtained
- Memory Conservation: Freeing walker resources when computation is complete
Algorithm Implementation
- Search Termination: Ending search algorithms when targets are located
- Conditional Processing: Stopping based on dynamic conditions discovered during traversal
- State Machine Transitions: Exiting traversal phases in complex algorithmic processes
Lifecycle Integration
The example demonstrates how disengage integrates with the complete walker lifecycle:
- Creation and Spawning: root spawn Visitor() activates the walker
- Traversal Execution: Walker moves through connected nodes via visit statements
- Node Interaction: Each visited node's ability executes upon walker arrival
- Controlled Termination: The node's speak ability calls disengage after processing
- State Cleanup: Walker transitions back to inactive status with preserved data
Design Patterns
Visitor Pattern Termination
The example shows a common pattern where nodes control visitor lifecycle:
- Nodes perform their processing (printing a message)
- Nodes then terminate the visitor's traversal
- This enables data locations to control when computation should stop
Conditional Disengage
Disengage can be combined with conditional logic:
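An illustrative walker (names and the step budget are hypothetical) that disengages once a condition is met:
walker LimitedSearcher {
    has steps: int = 0;
    has max_steps: int = 100;

    can search with entry {
        self.steps += 1;
        if self.steps >= self.max_steps {
            disengage;        # Budget exhausted; stop the traversal
        }
        visit [-->];
    }
}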
Graceful vs. Immediate Termination
Unlike error-based termination, disengage provides graceful termination that:
- Preserves walker state and accumulated data
- Maintains system integrity
- Enables post-traversal analysis or processing
Relationship to Other Control Statements
Disengage complements other object-spatial control statements:
- Visit: Adds destinations to walker traversal queue
- Skip: Terminates processing at current location but continues traversal
- Disengage: Terminates entire traversal and returns walker to inactive state
The disengage statement provides essential control over walker lifecycle management, enabling sophisticated algorithms that can terminate based on discovered conditions, computational completion, or resource constraints. It represents a key mechanism for managing the autonomous nature of walkers while maintaining programmatic control over the distributed computational process that characterizes Object-Spatial Programming.
Assignments#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac provides comprehensive assignment operations that extend Python's familiar syntax with enhanced type safety and explicit variable declaration capabilities. These assignment patterns support both traditional programming and object-spatial operations.
Basic Assignment Operations#
Standard assignment uses the = operator to bind values to variables:
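For example:
x = 10;
name = "Jac";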
Jac supports chained assignments for assigning the same value to multiple variables:
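For example:
a = b = c = 0;    # All three variables receive the same value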
Explicit Variable Declaration#
The let keyword provides explicit variable declaration, enhancing code clarity and supporting static analysis:
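For example:
let counter = 0;              # New variable, type inferred as int
let label: str = "start";     # New variable with an explicit type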
Explicit declaration makes variable creation intent clear and helps distinguish between new variable creation and existing variable modification.
Typed Assignments#
Type annotations provide compile-time type checking and documentation:
let count: int = 0;
let ratio: float = 3.14159;
let items: list[str] = ["apple", "banana", "cherry"];
let config: dict[str, any] = {"debug": true, "timeout": 30};
Type annotations enable early error detection and improve code maintainability by making data types explicit.
Augmented Assignment Operators#
Augmented assignments combine operations with assignment for concise code:
Arithmetic Operations:
counter += 1; # Addition assignment
balance -= withdrawal; # Subtraction assignment
total *= factor; # Multiplication assignment
average /= count; # Division assignment
result //= divisor; # Floor division assignment
remainder %= modulus; # Modulo assignment
power **= exponent; # Exponentiation assignment
Bitwise Operations:
flags &= mask; # Bitwise AND assignment
options |= new_flag; # Bitwise OR assignment
data ^= encryption_key; # Bitwise XOR assignment
bits <<= shift_amount; # Left shift assignment
value >>= shift_count; # Right shift assignment
Matrix Operations:
Destructuring Assignment#
Jac supports destructuring assignment for tuples and collections:
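For example (point here is assumed to be a two-element collection):
a, b, c = 1, 2, 3;     # Unpack multiple values in one statement
x, y = point;          # Unpack a two-element collection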
Destructuring enables elegant extraction of values from complex data structures.
Object-Spatial Assignment Patterns#
Assignments work seamlessly with object-spatial constructs:
walker DataCollector {
has results: list = [];
can collect with entry {
# Assign from node data
let node_value = here.data;
let neighbors = [-->];
# Augmented assignment with spatial data
self.results += [node_value];
# Typed assignment with graph references
let connected_nodes: list[node] = neighbors;
# Conditional assignment based on spatial context
let next_target = neighbors[0] if neighbors else None;
if (next_target) {
visit next_target;
}
}
}
node ProcessingNode {
has data: dict;
has processed: bool = false;
can update_data with visitor entry {
# Assignment within node abilities
let new_data = visitor.get_processed_data();
self.data |= new_data; # Dictionary merge assignment
self.processed = true;
}
}
Assignment Expression Evaluation#
Jac evaluates assignment expressions with predictable semantics:
# Right-to-left evaluation for chained assignments
a = b = c = expensive_computation(); # computed once
# Left-to-right evaluation for augmented assignments
matrix[i][j] += calculate_delta(i, j); # index computed before operation
Type Inference and Validation#
The compiler performs type inference for untyped assignments while validating typed assignments:
let inferred = 42; # Inferred as int
let explicit: float = 42; # Explicit conversion to float
let validated: str = "text"; # Type validation at compile time
Assignment in Control Structures#
Assignments integrate with control flow constructs:
# Assignment in conditional expressions
result = value if (temp := get_temperature()) > threshold else default;
# Assignment in loop constructs
for item in items {
let processed = transform(item);
results.append(processed);
}
Assignment operations provide the foundation for variable management in Jac programs, supporting both traditional programming patterns and the unique requirements of object-spatial computation where variables may hold references to nodes, edges, and walker states.
Expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Expressions in Jac form the computational backbone of the language, providing a rich hierarchy of operations that combine values, variables, and function calls into meaningful computations. Jac's expression system extends Python's familiar syntax while adding unique features for object-spatial programming and enhanced type safety.
Expression Hierarchy#
Jac expressions follow a well-defined precedence hierarchy:
- Conditional expressions: Ternary conditional operations
- Lambda expressions: Anonymous function definitions
- Concurrent expressions: Flow and wait operations
- Walrus assignments: Named expressions with :=
- Pipe expressions: Forward and backward piping
- Bitwise operations: Bit manipulation operations
- Logical operations: Boolean logic and comparisons
- Arithmetic operations: Mathematical computations
- Connect expressions: Data spatial connections
- Atomic expressions: Basic values and references
Basic Expression Types#
42 # Integer literal
"hello world" # String literal
user_name # Variable reference
calculate(x, y) # Function call
result = value if condition else alternative; # Conditional expression
Object-Spatial Expression Integration#
Expressions integrate seamlessly with object-spatial constructs:
walker DataProcessor {
can analyze with entry {
neighbors = [-->];
connected_count = len(neighbors);
next_node = neighbors[0] if neighbors else None;
if connected_count > threshold {
visit [n for n in neighbors if n.is_active()];
}
}
}
Type-Safe Expression Evaluation#
let count: int = len(items);
let ratio: float = total / count;
let is_valid: bool = (count > 0) and (ratio < 1.0);
Performance Considerations#
- Left-to-right evaluation for same precedence operations
- Short-circuit evaluation for logical operators
- Constant folding for literal expressions
- Type specialization for performance
Expressions provide the foundation for all computational operations in Jac, supporting both traditional programming patterns and object-spatial algorithms while maintaining type safety and performance optimization.
Concurrent expressions#
Code Example
Runnable Example in Jac and JacLib
import from time { sleep }
node A {
has val: int = 0;
can do with entry {
print("Started");
sleep(2);
print(visitor);
}
}
walker B {
has name: str;
}
def add(x: int, y: int) -> int {
print(x);
z = x + y;
sleep(2);
print(x);
return z;
}
with entry {
t1 = flow A() spawn B("Hi") ;
task1 = flow add(1, 10) ;
task2 = flow add(2, 11) ;
print("All are started");
res1 = wait task1 ;
res2 = wait task2 ;
print("All are done");
print(res1);
print(res2);
}
Jac Grammar Snippet
Description
Concurrent expressions enable parallel and asynchronous execution in Jac through the flow and wait modifiers. These constructs provide built-in concurrency support, allowing efficient parallel processing while maintaining clean, readable code.
Flow Modifier#
The flow modifier initiates parallel execution of expressions:
# Execute operations in parallel
flow process_data(chunk1);
flow process_data(chunk2);
flow process_data(chunk3);
Wait Modifier#
The wait modifier synchronizes parallel operations:
# Wait for specific operation
result = wait async_operation();
# Wait for multiple operations
wait all_tasks_complete();
Combined Usage#
Flow and wait work together for parallel patterns:
walker ParallelProcessor {
can process with entry {
# Start parallel operations
task1 = flow compute_heavy(here.data1);
task2 = flow compute_heavy(here.data2);
task3 = flow compute_heavy(here.data3);
# Wait for all results
result1 = wait task1;
result2 = wait task2;
result3 = wait task3;
# Combine results
here.result = combine(result1, result2, result3);
}
}
Parallel Walker Spawning#
Concurrent execution with walkers:
walker Analyzer {
can analyze with entry {
# Spawn walkers in parallel
flow spawn ChildWalker() on node1;
flow spawn ChildWalker() on node2;
flow spawn ChildWalker() on node3;
# Continue while children process
visit [-->];
}
}
Async Graph Operations#
Parallel graph traversal:
walker ParallelTraverser {
can traverse with entry {
children = [-->];
# Process children concurrently
tasks = [];
for child in children {
task = flow process_node(child);
tasks.append(task);
}
# Collect results
results = [];
for task in tasks {
result = wait task;
results.append(result);
}
report aggregate(results);
}
}
Error Handling#
Managing errors in concurrent operations:
can parallel_safe_process(items: list) -> list {
results = [];
errors = [];
# Start all tasks
tasks = [];
for item in items {
task = flow process_item(item);
tasks.append({"item": item, "task": task});
}
# Collect results with error handling
for t in tasks {
try {
result = wait t["task"];
results.append(result);
} except Exception as e {
errors.append({"item": t["item"], "error": e});
}
}
if errors {
handle_errors(errors);
}
return results;
}
Concurrency Patterns#
Map-Reduce Pattern#
can map_reduce(data: list, mapper: func, reducer: func) -> any {
# Map phase - parallel
mapped = [];
for chunk in partition(data) {
task = flow mapper(chunk);
mapped.append(task);
}
# Collect mapped results
results = [];
for task in mapped {
result = wait task;
results.append(result);
}
# Reduce phase
return reducer(results);
}
Pipeline Pattern#
walker Pipeline {
can process with entry {
# Stage 1 - parallel data fetch
data1 = flow fetch_source1();
data2 = flow fetch_source2();
data3 = flow fetch_source3();
# Stage 2 - process as ready
processed1 = flow transform(wait data1);
processed2 = flow transform(wait data2);
processed3 = flow transform(wait data3);
# Stage 3 - aggregate
final = aggregate([
wait processed1,
wait processed2,
wait processed3
]);
report final;
}
}
Best Practices#
- Granularity: Balance task size for efficient parallelism
- Dependencies: Clearly manage data dependencies
- Error Propagation: Handle errors from parallel tasks
- Resource Limits: Consider system constraints
- Synchronization: Use wait appropriately to avoid race conditions
Integration with Object-Spatial#
Concurrent expressions enhance graph processing:
walker GraphAnalyzer {
can analyze with entry {
# Parallel subgraph analysis
subgraphs = partition_graph(here);
analyses = [];
for sg in subgraphs {
analysis = flow analyze_subgraph(sg);
analyses.append(analysis);
}
# Combine results
combined = {};
for a in analyses {
result = wait a;
merge_results(combined, result);
}
report combined;
}
}
Concurrent expressions provide powerful primitives for parallel execution in Jac, enabling efficient utilization of modern multi-core systems while maintaining the clarity and expressiveness of the language's object-spatial programming model.
Walrus assignments#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Walrus assignments in Jac provide named expressions using the :=
operator, enabling variable assignment within expressions. This feature allows for more concise code by combining assignment and expression evaluation in a single operation.
Basic Syntax#
# Assign and test in one operation
if (count := len(items)) > 0 {
print(f"Processing {count} items");
}
# Avoid repeated function calls
if (result := expensive_computation()) > threshold {
process(result); # Called only once
}
Common Use Cases#
Loop optimization:
List comprehensions:
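Since the original runnable snippets for these cases are not reproduced inline, the following sketch illustrates both patterns, assuming the parenthesized walrus form shown above is also accepted in while conditions and comprehension filters:
with entry {
    items = [1, 2, 3];
    # Loop optimization: bind the recomputed length once per iteration
    while (count := len(items)) > 0 {
        print(f"{count} items remaining");
        items.pop();
    }
    # List comprehension: reuse the intermediate result in both the filter and the output
    values = [-3, 1, 5, -8, 2];
    doubled_positives = [y for x in values if (y := x * 2) > 0];
    print(doubled_positives);  # [2, 10, 4]
}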
Object-Spatial Integration#
walker GraphAnalyzer {
can analyze with entry {
if (neighbors := [-->]) and len(neighbors) > 2 {
for node in neighbors {
if (data := node.get_data()) and data.is_important() {
visit node;
}
}
}
}
}
Scope and Type Safety#
Variables created with walrus assignments: - Extend beyond the expression scope - Maintain Jac's type safety through inference - Follow standard scoping rules within functions
Best Practices#
- Use meaningful variable names
- Avoid overuse in complex expressions
- Combine with guards for conditional logic
- Prefer for performance optimization scenarios
Walrus assignments provide efficient code patterns while maintaining readability and type safety in both traditional and object-spatial programming contexts.
Lambda expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Lambda expressions in Jac provide a concise way to create anonymous functions for functional programming patterns. These expressions enable the creation of small, single-expression functions without the overhead of formal function definitions, supporting Jac's functional programming capabilities while maintaining type safety through required parameter annotations.
Basic Lambda Syntax
Lambda expressions follow this general pattern:
Example Usage
The provided example demonstrates basic lambda creation and invocation:
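The runnable example itself is not reproduced here; a minimal reconstruction consistent with the component breakdown below is:
with entry {
    x = lambda a: int, b: int : b + a;
    print(x(5, 4));  # 9
}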
Components breakdown:
- lambda keyword: Introduces the lambda expression
- Parameter list: a: int, b: int with required type annotations
- Colon separator: : separates parameters from the expression body
- Expression body: b + a defines the computation
- Assignment: Lambda stored in variable x for later use
- Invocation: x(5, 4) calls the lambda with arguments
Type Annotations
Unlike Python, Jac requires explicit type annotations for all lambda parameters:
Required Parameter Types
square = lambda x: int : x * x;
divide = lambda a: float, b: float : a / b;
concat = lambda s1: str, s2: str : s1 + s2;
Benefits of typed parameters: - Compile-time verification: Type mismatches caught early - Self-documentation: Parameter types clearly specified - IDE support: Better autocomplete and error detection - Performance optimization: Compiler can generate specialized code
Return Type Inference
Lambda return types are automatically inferred from the expression:
lambda x: int : x * 2 # Returns int
lambda x: float : x / 2.0 # Returns float
lambda x: str : x.upper() # Returns str
Functional Programming Patterns
Higher-Order Functions Lambdas integrate with higher-order functions:
numbers = [1, 2, 3, 4, 5];
squared = map(lambda x: int : x * x, numbers);
evens = filter(lambda x: int : x % 2 == 0, numbers);
Event Handlers
Sorting and Comparison
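As an illustrative sketch (the word list is arbitrary, and the lambda is parenthesized purely for clarity), a typed lambda can serve as a sort key:
with entry {
    words = ["banana", "fig", "apple"];
    by_length = sorted(words, key=(lambda w: str : len(w)));
    print(by_length);  # ['fig', 'apple', 'banana']
}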
Expression Limitations
Lambda expressions are limited to single expressions:
Valid lambdas:
lambda x: int : x + 1 # Arithmetic
lambda x: int : x if x > 0 else -x # Conditional expression
lambda pair: tuple : pair[0] + pair[1] # Tuple access
lambda obj: MyClass : obj.method() # Method calls
Invalid lambdas (require full functions):
# Multiple statements not allowed
lambda x: int : {
y = x * 2;
return y + 1;
}
# Loops not allowed
lambda items: list : for item in items { process(item); }
Variable Capture and Closures
Lambdas can capture variables from their enclosing scope:
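A small sketch of capture behavior (variable names are illustrative):
with entry {
    factor = 3;
    scale = lambda x: int : x * factor;  # 'factor' is captured from the enclosing scope
    print(scale(10));  # 30
}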
Closure behavior: - Lexical scoping: Lambdas capture variables from creation context - Late binding: Variable values resolved at call time - Immutable capture: Captured variables maintain their reference
Complex Lambda Examples
Multi-parameter operations:
distance = lambda x1: float, y1: float, x2: float, y2: float :
((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5;
Conditional logic:
String processing:
Object property access:
Integration with Collections
Lambdas work seamlessly with collection operations:
List comprehensions:
Dictionary operations:
Set operations:
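Illustrative sketches of the three cases just listed (the data and the parenthesized lambdas are assumptions, not taken from the runnable example):
with entry {
    nums = [3, 1, 4, 1, 5];
    double = lambda x: int : x * 2;
    doubled_list = [double(n) for n in nums];        # list comprehension
    by_value = sorted({"a": 3, "b": 1}.items(), key=(lambda kv: tuple : kv[1]));  # dictionary operation
    doubled_set = {double(n) for n in nums};         # set comprehension
    print(doubled_list, by_value, doubled_set);
}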
Object-Spatial Integration
Lambdas can be used within object-spatial constructs:
Walker abilities:
walker Processor {
can process with `node entry {
transform = lambda data: str : data.upper();
here.data = transform(here.data);
}
}
Node filtering:
Edge processing:
Performance Considerations
Efficiency: - Lightweight creation: Minimal overhead for lambda instantiation - Optimized execution: Compiled to efficient function calls - Type specialization: Optimized based on parameter types
Memory usage: - Closure overhead: Captured variables increase memory footprint - Garbage collection: Lambdas cleaned up when references released - Optimization: Simple lambdas may be inlined by compiler
Common Patterns
Data transformation:
Validation:
Configuration:
Best Practices
- Keep it simple: Use lambdas for single expressions only
- Type clearly: Always provide explicit parameter types
- Descriptive names: Use meaningful variable names for stored lambdas
- Avoid complexity: Use regular functions for complex logic
- Consider readability: Don't sacrifice clarity for brevity
Comparison with Regular Functions
Lambda appropriate for: - Simple transformations - Event handlers - Inline operations - Functional programming patterns
Regular functions appropriate for: - Complex logic - Multiple statements - Detailed documentation needs - Reusable algorithms
Lambda expressions in Jac provide a powerful tool for functional programming while maintaining the language's emphasis on type safety and clarity. They enable concise expression of simple operations while integrating seamlessly with both traditional programming constructs and Jac's innovative object-spatial features.
Pipe expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Pipe expressions enable functional-style data transformation through left-to-right value flow, eliminating deeply nested function calls and creating readable transformation chains. This feature is particularly powerful in object-spatial contexts where computation flows through topological structures.
Forward Pipe Operator (|>)#
The forward pipe operator passes the result of the left expression as the first argument to the right expression:
# Traditional nested approach
result = process(transform(validate(data)));
# Pipe expression approach
result = data |> validate |> transform |> process;
This transformation improves readability by matching the natural left-to-right flow of data processing.
Basic Transformation Chains#
Pipe expressions excel at creating clear data processing pipelines:
# Numeric processing
processed_numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
|> filter(|x| x % 2 == 0)
|> map(|x| x * x)
|> sum;
# String manipulation
formatted_message = " Hello World "
|> strip
|> lower
|> replace(" ", "_")
|> capitalize;
Method Chaining Integration#
Pipes work seamlessly with object methods and archetype abilities:
obj DataProcessor {
def normalize(self, data: list) -> list {
max_val = max(data);
return [x / max_val for x in data];
}
def scale(self, data: list, factor: float) -> list {
return [x * factor for x in data];
}
def round_values(self, data: list) -> list {
return [round(x, 2) for x in data];
}
}
processor = DataProcessor();
result = raw_measurements
|> processor.normalize
|> processor.scale(100.0)
|> processor.round_values;
Object-Spatial Pipeline Integration#
Pipe expressions integrate naturally with object-spatial operations:
walker GraphAnalyzer {
can analyze_network with entry {
# Chain spatial operations
network_metrics = here
|> get_connected_nodes
|> filter_by_activity_level
|> calculate_centrality_scores
|> aggregate_statistics;
# Process node data through pipeline
processed_data = here.raw_data
|> clean_data
|> normalize_values
|> apply_transformations
|> validate_results;
# Update node with processed results
here.update_metrics(network_metrics);
here.set_processed_data(processed_data);
}
}
node DataNode {
has raw_data: list;
has processed_data: dict;
can get_connected_nodes(self) -> list {
return [-->] |> map(|edge| edge.target);
}
can update_metrics(self, metrics: dict) {
self.metrics = metrics;
}
}
Multi-line Pipeline Formatting#
Complex pipelines can span multiple lines for enhanced readability:
comprehensive_analysis = dataset
|> remove_outliers(threshold=2.5)
|> apply_feature_engineering(
features=["normalized", "scaled", "encoded"],
parameters={"scale_factor": 1.0}
)
|> split_train_test(ratio=0.8)
|> train_model(algorithm="random_forest")
|> evaluate_performance
|> generate_report;
Error-Safe Pipelines#
Pipe expressions can incorporate error handling and null-safe operations:
# Safe pipeline with optional operations
safe_result = potentially_null_input
|> validate_input
|> transform_safely
|> process_if_valid
|> default_on_error("fallback_value");
# Conditional pipeline execution
conditional_result = data
|> (|x| validate(x) if x.needs_validation else x)
|> (|x| expensive_operation(x) if x.size > threshold else x)
|> finalize_processing;
Graph Traversal Pipelines#
Pipe expressions excel in graph traversal and analysis scenarios:
walker PathOptimizer {
can find_optimal_path with entry {
optimal_route = here
|> get_all_possible_paths
|> filter_by_constraints(max_length=10, avoid_cycles=true)
|> calculate_path_costs
|> sort_by_efficiency
|> select_best_path;
# Execute the optimal path
visit optimal_route;
}
}
walker DataAggregator {
has collected_data: list = [];
can aggregate_from_network with entry {
aggregated_results = [-->*] # All reachable nodes
|> filter(|n| n.has_data())
|> map(|n| n.extract_data())
|> group_by_category
|> calculate_statistics
|> format_results;
self.collected_data.append(aggregated_results);
}
}
Performance Considerations#
Pipe expressions maintain efficiency through lazy evaluation and optimization:
# Efficient pipeline with early termination
result = large_dataset
|> filter(|item| item.is_relevant()) # Reduces dataset size early
|> take(100) # Limits processing to first 100
|> expensive_transformation # Only applied to filtered subset
|> final_aggregation;
Functional Composition Patterns#
Pipes enable elegant functional composition:
# Reusable transformation functions
def clean_and_validate(data: list) -> list {
return data |> remove_nulls |> validate_format |> normalize_encoding;
}
def analyze_and_report(data: list) -> dict {
return data |> statistical_analysis |> generate_insights |> format_report;
}
# Composed pipeline
final_report = raw_input
|> clean_and_validate
|> apply_business_rules
|> analyze_and_report;
Pipe expressions transform complex data processing into intuitive, maintainable code that naturally expresses the flow of computation through both traditional data structures and object-spatial topologies.
Pipe back expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Pipe back expressions provide the reverse flow of pipe forward expressions, passing the result of the right expression as the last argument to the left expression. This operator enables different composition patterns that can be more natural for certain operations.
Backward Pipe Operator (<|)#
The backward pipe operator flows data from right to left:
# Forward pipe - data flows left to right
result = data |> process |> format;
# Backward pipe - data flows right to left
result = format <| process <| data;
Use Cases#
Building Processing Pipelines#
# Define a processing pipeline right-to-left
processor = output_formatter
<| data_validator
<| input_parser;
# Apply the pipeline
result = processor(raw_input);
Partial Application Patterns#
# Create specialized functions
process_users = save_to_database
<| validate_user_data
<| normalize_user_fields;
# Use the composed function
process_users(user_list);
Combining with Forward Pipes#
Mix both operators for expressive code:
# Process data then apply formatting
final_result = formatter <| (
raw_data
|> clean
|> validate
|> transform
);
Graph Operations#
In object-spatial contexts:
walker Analyzer {
can analyze with entry {
# Right-to-left node filtering
targets = filter_reachable
<| sort_by_priority
<| [-->];
# Process results left-to-right
results = targets
|> extract_data
|> aggregate;
}
}
Function Composition#
Create reusable processing chains:
# Compose validators
validate_all = validate_format
<| validate_range
<| validate_type;
# Compose transformers
transform_all = final_format
<| apply_rules
<| normalize;
# Full pipeline
process = transform_all <| validate_all;
Precedence and Grouping#
Understanding operator precedence:
# Parentheses for clarity
result = (step3 <| step2) <| step1;
# Mixed operators need careful grouping
output = final_step <| (
input |> first_step |> second_step
);
Common Patterns#
Builder Pattern#
# Build configuration right-to-left
config = apply_overrides
<| set_defaults
<| parse_config_file
<| "config.json";
Middleware Chain#
# Web request processing
handle_request = send_response
<| process_business_logic
<| authenticate
<| parse_request;
Data Validation Pipeline#
# Validation stages
validate = report_errors
<| check_business_rules
<| verify_data_types
<| sanitize_input;
Best Practices#
- Use <| when: Building processing chains where later stages depend on earlier ones
- Use |> when: Transforming data through sequential steps
- Mix operators: When it improves readability
- Group with parentheses: To make precedence explicit
Comparison with Forward Pipe#
# Forward pipe - follows data flow
processed = data |> step1 |> step2 |> step3;
# Backward pipe - follows dependency order
processed = step3 <| step2 <| step1 <| data;
# Both achieve the same result
Pipe back expressions offer an alternative composition style that can be more intuitive when thinking about processing pipelines in terms of dependencies rather than data flow. They complement forward pipes to provide flexible, expressive ways to compose operations in Jac.
Bitwise expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Bitwise expressions in Jac provide low-level bit manipulation operations that work directly on the binary representation of integer values. These operations are essential for systems programming, data encoding, optimization algorithms, and working with binary data formats.
Bitwise Operators
Jac supports all standard bitwise operators:
- AND (&): Performs bitwise AND operation between two operands
- OR (|): Performs bitwise OR operation between two operands
- XOR (^): Performs bitwise exclusive OR operation between two operands
- NOT (~): Performs bitwise complement (NOT) operation on a single operand
- Left Shift (<<): Shifts bits to the left by specified positions
- Right Shift (>>): Shifts bits to the right by specified positions
Operator Semantics
Bitwise AND (&)
- Returns 1 for each bit position where both operands have 1
- Example: 5 & 3 → 101 & 011 = 001 = 1
Bitwise OR (|)
- Returns 1 for each bit position where at least one operand has 1
- Example: 5 | 3 → 101 | 011 = 111 = 7
Bitwise XOR (^)
- Returns 1 for each bit position where operands differ
- Example: 5 ^ 3 → 101 ^ 011 = 110 = 6
Bitwise NOT (~)
- Inverts all bits (1 becomes 0, 0 becomes 1)
- Example: ~5 → ~101 = ...11111010 (two's complement representation)
Left Shift (<<)
- Shifts bits left, filling with zeros from the right
- Example: 5 << 1 → 101 << 1 = 1010 = 10
Right Shift (>>)
- Shifts bits right, behavior depends on sign (arithmetic shift)
- Example: 5 >> 1 → 101 >> 1 = 10 = 2
Operator Precedence
Bitwise operators follow this precedence order (highest to lowest):
1. Bitwise NOT (~)
2. Shift operators (<<, >>)
3. Bitwise AND (&)
4. Bitwise XOR (^)
5. Bitwise OR (|)
Common Use Cases
Bitwise operations are commonly used for:
- Flags and masks: Setting, clearing, and checking individual bits
- Performance optimization: Fast multiplication/division by powers of 2 using shifts
- Data compression: Bit packing and unpacking
- Cryptography: XOR operations for encryption algorithms
- Hardware interfacing: Direct bit manipulation for embedded systems
The provided code example demonstrates all bitwise operators with operands 5 and 3, showing practical usage of each operation and their results.
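Since that runnable example is not reproduced inline here, a sketch along the lines it describes would be:
with entry {
    p = 5;
    q = 3;
    print(p & q);   # 1
    print(p | q);   # 7
    print(p ^ q);   # 6
    print(~p);      # -6
    print(p << 1);  # 10
    print(p >> 1);  # 2
}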
Understanding bitwise expressions is crucial for low-level programming tasks and optimizations in Jac applications.
Logical and compare expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Logical and comparison expressions in Jac provide the foundation for conditional logic, enabling programs to make decisions based on data relationships and boolean conditions with enhanced type safety and null-aware comparisons.
Comparison Operators#
a == b # Equal to
a != b # Not equal to
a < b # Less than
a <= b # Less than or equal to
a > b # Greater than
a >= b # Greater than or equal to
a is b # Identity comparison
a in collection # Membership test
Logical Operators#
condition1 and condition2 # Logical AND
condition1 or condition2 # Logical OR
not condition # Logical NOT
condition1 && condition2 # Alternative AND syntax
condition1 || condition2 # Alternative OR syntax
Short-Circuit Evaluation#
# Safe evaluation - second expression not evaluated if first is false
user and user.is_active()
# Efficient computation - avoids expensive call if cached
cached_result or expensive_computation()
Chained Comparisons#
# Range checking
if 0 <= value <= 100 {
print("Value is in valid range");
}
# Multiple conditions
if min_age <= user.age < max_age and user.is_verified() {
grant_access();
}
Object-Spatial Integration#
walker GraphValidator {
can validate with entry {
neighbors = [-->];
if here.value < 0 or here.value > 100 {
report f"Invalid value: {here.value}";
}
if len(neighbors) > 5 and here.is_hub() {
visit neighbors.filter(lambda n: Node : n.priority > 0);
}
}
}
Type-Safe Comparisons#
let count: int = 5;
let limit: int = 10;
if count < limit { # Type-compatible comparison
proceed();
}
Performance Considerations#
- Order cheaper conditions first for short-circuit efficiency
- Use parentheses for complex logical expressions
- Avoid repeated expensive function calls in conditions
Elvis Operator#
Jac offers a concise conditional expression using the Elvis operator ?:. The expression a ?: b evaluates to a if it is not None, otherwise it yields b:
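A minimal sketch (the variable names are illustrative):
with entry {
    configured_name = None;
    display_name = configured_name ?: "anonymous";  # left side is None, so the fallback is used
    print(display_name);  # anonymous
}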
This operator provides the common ternary pattern without repeating the tested value, improving readability for simple defaulting logic.
Logical and comparison expressions provide the decision-making foundation for Jac programs, enabling sophisticated conditional logic while maintaining type safety and performance optimization.
Arithmetic expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac supports a comprehensive set of arithmetic operations that follow the standard mathematical precedence rules. The arithmetic expression system in Jac is designed to be intuitive and consistent with mathematical conventions while maintaining compatibility with Python's arithmetic operations.
Basic Arithmetic Operators
The fundamental arithmetic operators available in Jac are:
- Addition (+): Adds two operands
- Subtraction (-): Subtracts the right operand from the left operand
- Multiplication (*): Multiplies two operands
- Division (/): Performs floating-point division
- Floor Division (//): Performs division and returns the floor of the result
- Modulo (%): Returns the remainder of division
- Exponentiation (**): Raises the left operand to the power of the right operand
Operator Precedence
Jac follows the standard mathematical order of operations (PEMDAS/BODMAS):
- Parentheses () (highest precedence)
- Exponentiation **
- Unary plus/minus +x, -x
- Multiplication *, Division /, Floor Division //, Modulo %
- Addition +, Subtraction - (lowest precedence)
Expression Combinations
Complex arithmetic expressions can be constructed by combining multiple operators and operands. Parentheses can be used to override the default precedence and create more complex calculations.
The provided code example demonstrates all basic arithmetic operations including multiplication (7 * 2), division (15 / 3), floor division (15 // 3), modulo (17 % 5), exponentiation (2 ** 3), and a combination expression with parentheses to control evaluation order ((9 + 2) * 9 - 2).
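That example is not reproduced inline here; the operations it describes amount to:
with entry {
    print(7 * 2);            # 14
    print(15 / 3);           # 5.0
    print(15 // 3);          # 5
    print(17 % 5);           # 2
    print(2 ** 3);           # 8
    print((9 + 2) * 9 - 2);  # 97
}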
These arithmetic expressions form the foundation for mathematical computations in Jac programs and can be used in variable assignments, function arguments, and conditional statements.
Connect expressions#
Code Example
Runnable Example in Jac and JacLib
node node_a {
has value: int;
}
walker Creator {
can create with `root entry;
can travel with `root | node_a entry;
}
edge MyEdge {
has val: int = 5;
}
impl Creator.create {
end = here;
for i=0 to i<7 by i+=1 {
if i % 2 == 0 {
end ++> (end := node_a(value=i));
} else {
end +>:MyEdge:val=i:+> (end := node_a(value=i + 10));
}
}
}
impl Creator.travel {
for i in [->:MyEdge:val <= 6:->] {
print(i.value);
}
visit [-->];
}
with entry :__main__ {
root spawn Creator();
}
Jac Grammar Snippet
Description
Connect expressions in Jac provide the fundamental mechanism for creating topological relationships between nodes, implementing the edge creation and management aspects of Object-Spatial Programming. These expressions enable the construction of graph structures where computation can flow through connected data locations.
Theoretical Foundation
In OSP theory, edges are first-class entities that represent directed relationships between nodes, encoding both the topology of connections and the semantics of those relationships. Connect expressions create these edge instances, establishing the pathways through which walkers can traverse and enabling the "computation moving to data" paradigm.
Basic Connection Syntax
Simple Connections The simplest form creates basic edges between nodes:
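For instance, assuming source_node and destination_node are node instances already in scope, a plain connection is written as:
source_node ++> destination_node;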
This creates a directed edge from the source node to the destination node, enabling walker traversal from source to destination.
Typed Edge Connections More sophisticated connections can specify edge types and properties:
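Reusing the MyEdge archetype from the runnable example above (source_node and destination_node are assumed to be existing nodes), a typed connection that also sets a property looks like:
source_node +>:MyEdge:val=3:+> destination_node;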
This syntax allows for:
- Edge typing: Specifying the class of edge to create (EdgeType)
- Property assignment: Setting initial values for edge properties (property=value)
- Semantic relationships: Encoding meaning into the connection itself
Edge as First-Class Objects
Edges in Jac are not merely references but full-fledged objects with their own properties and behaviors:
This defines an edge class with:
- State: Properties that can store data (val: int)
- Default values: Initial property assignments (= 5)
- Type identity: Distinguished from other edge types
Dynamic Connection Creation
The example demonstrates dynamic topology construction within walker abilities:
impl Creator.create {
end = here;
for i=0 to i<7 by i+=1 {
if i % 2 == 0 {
end ++> (end := node_a(value=i));
} else {
end +>:MyEdge:val=i:+> (end := node_a(value=i + 10));
}
}
}
Key aspects:
- Contextual reference: here refers to the walker's current location
- Sequential construction: Building connected chains of nodes dynamically
- Conditional topology: Using different connection types based on conditions
- Property parameterization: Setting edge properties based on runtime values (val=i)
Connection Patterns
Chain Building Creating linear sequences of connected nodes:
This pattern:
- Connects the current end
node to a newly created node
- Updates end
to reference the new node for the next iteration
- Builds a chain topology suitable for sequential processing
Typed Connections with Properties Creating semantically rich connections:
This pattern:
- Creates edges of specific type (MyEdge)
- Assigns properties during creation (val=i)
- Enables edge-based filtering and processing in traversal
Edge Traversal and Filtering
Connect expressions enable sophisticated traversal patterns through edge filtering:
This demonstrates:
- Edge-type filtering: Only traverse MyEdge connections
- Property-based selection: Filter edges where val <= 6
- Traversal integration: Iterate over filtered edge destinations
- Data access: Access properties of connected nodes (i.value)
Bidirectional vs. Directional Connections
Jac supports various connection directionalities:
- Outgoing: ++>
creates edges from source to destination
- Incoming: <++
creates edges from destination to source
- Bidirectional: <++>
creates edges in both directions
Connection in Object-Spatial Context
Connect expressions integrate seamlessly with walker traversal:
- Topology Construction: Walkers can build the graph structure they will later traverse
- Dynamic Adaptation: Connections can be created based on discovered data or conditions
- Typed Relationships: Different edge types enable specialized traversal behaviors
- Property-Rich Edges: Edge properties provide context for traversal decisions
Lifecycle and Memory Management
Connected structures follow OSP lifecycle rules: - Node dependency: Edges automatically deleted when endpoint nodes are deleted - Referential integrity: Prevents dangling edge references - Dynamic modification: Connections can be created and destroyed during execution
Use Cases
Connect expressions enable various topological patterns:
Graph Construction - Social networks: Users connected by relationship types (friend, follower, etc.) - Workflow systems: Tasks connected by dependency relationships - State machines: States connected by transition conditions
Algorithm Implementation - Search trees: Building searchable hierarchical structures - Path planning: Creating route networks with weighted connections - Data pipelines: Connecting processing stages with typed flows
Real-World Modeling - Transportation networks: Locations connected by route types (road, rail, air) - Organizational structures: Entities connected by reporting relationships - Knowledge graphs: Concepts connected by semantic relationships
Performance Considerations
Connect expressions in Jac are designed for efficiency: - Incremental construction: Build topology as needed rather than pre-allocating - Type-specific optimization: Edge types enable specialized storage and traversal - Property indexing: Edge properties can be indexed for fast filtering - Memory locality: Related nodes and edges can be co-located for cache efficiency
The example demonstrates a complete pattern where a walker constructs a mixed topology using both simple and typed connections, then traverses the structure using edge filtering to process specific subsets of the data. This showcases how connect expressions enable both the construction and utilization phases of object-spatial programming, creating rich topological structures that support sophisticated computational patterns.
Connect expressions represent a fundamental departure from traditional data structure approaches, enabling developers to construct and modify graph topologies dynamically while maintaining type safety and semantic clarity through edge typing and property systems.
Atomic expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Atomic expressions in Jac represent the most fundamental and indivisible units of expression evaluation. They serve as building blocks for more complex expressions and include literals, identifiers, and other primary expression forms.
Atomic Pipe Forward Expressions
The example demonstrates atomic pipe forward operations using the :>
operator, which enables a functional programming style by passing values through a chain of operations:
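A one-line sketch of the form being described:
"Hello world!" :> print;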
This expression takes the string literal "Hello world!"
and pipes it forward to the print
function, equivalent to calling print("Hello world!")
.
Chained Atomic Operations
Atomic expressions can be chained together for more complex operations:
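For example, as described below:
"Welcome" :> type :> print;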
This chains multiple operations:
1. Start with the string "Welcome"
2. Pipe it to type
function to get the type information
3. Pipe the result to print
to display it
Benefits of Atomic Pipe Expressions
- Readability: Left-to-right reading flow that matches natural language
- Composition: Easy chaining of operations without nested function calls
- Functional style: Enables pipeline-based programming patterns
- Clarity: Makes data flow explicit and easy to follow
Comparison with Traditional Syntax
Traditional nested function calls:
Atomic pipe forward style:
The pipe forward syntax eliminates the need to read expressions from inside-out, making code more intuitive and maintainable.
Atomic expressions form the foundation of Jac's expression system, enabling both traditional and functional programming paradigms while maintaining clear, readable code structure.
Atomic pipe back expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Atomic pipe back expressions in Jac provide an alternative directional flow for data processing using the <:
operator. This feature complements the pipe forward operator (:>
) by enabling right-to-left data flow, offering flexibility in expression composition and readability.
Atomic Pipe Back Syntax
The pipe back operator <:
takes data from the right side and passes it to the function or expression on the left side:
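A one-line sketch of the form being described:
print <: "Hello world!";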
This expression takes the string "Hello world!"
and pipes it back to the print
function, equivalent to calling print("Hello world!")
.
Mixed Directional Piping
Jac allows combining pipe forward (:>
) and pipe back (<:
) operators in the same expression for flexible data flow:
This complex expression demonstrates:
1. a + b
- concatenates two lists
2. :> len
- pipes the result forward to len
function
3. len <:
- pipes the length result back to another len
function call
Comparison of Pipe Directions
Pipe Forward (:>) - Left to right flow:
Pipe Back (<:) - Right to left flow:
Mixed Flow - Combining directions:
Use Cases for Pipe Back
Pipe back expressions are particularly useful when:
- Function-first thinking: When you want to emphasize the operation before the data
- Complex compositions: Building expressions that read more naturally with mixed flow
- Code organization: Structuring expressions to match logical thinking patterns
- Readability preferences: Some algorithms express more clearly with backward flow
Expression Evaluation
Despite the directional syntax, evaluation follows standard precedence rules. The pipe operators provide syntactic convenience while maintaining logical evaluation order.
Benefits
- Flexibility: Choose the most readable direction for data flow
- Composition: Mix directions for optimal expression clarity
- Expressiveness: Match syntax to problem domain thinking patterns
- Consistency: Maintain functional programming patterns with directional choice
Atomic pipe back expressions enhance Jac's functional programming capabilities by providing bidirectional data flow options that improve code readability and expressiveness.
Object spatial spawn expressions#
Code Example
Runnable Example in Jac and JacLib
walker Adder {
can do with `root entry;
}
node node_a {
has x: int = 0,
y: int = 0;
can add with Adder entry;
}
impl Adder.do {
here ++> node_a();
visit [-->];
}
impl node_a.add {
self.x = 550;
self.y = 450;
print(int(self.x) + int(self.y));
}
with entry {
    # spawn will initiate the walker Adder from the root node
Adder() spawn root;
}
Jac Grammar Snippet
Description
Data spatial spawn expressions in Jac implement the fundamental mechanism for activating walkers within the topological structure, transitioning them from inactive objects to active participants in the distributed computational system. This operation embodies the initialization phase of the "computation moving to data" paradigm that characterizes Data Spatial Programming.
Theoretical Foundation
In DSP theory, the spawn operator (⇒) activates a walker within the topological structure by placing it at a specified node, edge, or path. This operation transitions the walker from a standard object state to an active data spatial entity within the graph G, updating the location mapping L and potentially initializing the walker's traversal queue Q_w.
Basic Spawn Syntax
Jac provides flexible syntax for spawn expressions:
Walker-First Syntax
Location-First Syntax
Both forms achieve the same result, allowing developers to choose the syntax that best fits their code organization and readability preferences.
Spawn Expression Types
Direct Node Spawning The example demonstrates spawning a walker directly on a node:
This syntax:
- Creates a new Adder walker instance (Adder())
- Activates it at the root node location
- Transitions the walker from inactive to active state
- Sets the walker's location mapping: L(walker) = root
- Initializes an empty traversal queue: Q_w = []
- Executes abilities only on the spawned node
Edge Spawning Walkers can also be spawned directly on edges:
When spawning on an edge: - Walker is activated at the edge location - Automatically queues both the edge and its target node - Sets the walker's location mapping: L(walker) = edge - Initializes traversal queue with target node: Q_w = [target_node] - Executes abilities on both the edge and the connected node
This automatic queueing behavior ensures that edge-spawned walkers process both the relationship (edge) and the destination (node), enabling complete traversal of the topological structure.
Walker Lifecycle and Activation
The spawn operation transforms a walker through several phases:
Pre-Spawn State
Before spawning: - Walker exists as a standard object - Abilities are defined but inactive - No location context or traversal queue - Cannot participate in data spatial operations
Spawn Activation
When Adder() spawn root executes:
1. Location Assignment: Walker is positioned at the root node
2. Context Activation: Walker gains access to spatial references (here, self)
3. Ability Triggering: Entry abilities for the spawn location execute
4. Queue Initialization: Traversal queue is prepared for future visits
Post-Spawn Execution After activation, the walker's abilities execute in the established order: 1. Location entry abilities: Node's abilities for the arriving walker type 2. Walker entry abilities: Walker's abilities for the current location type
Contextual References in Spawned Walkers
Once spawned, walkers gain access to spatial context:
impl Adder.do {
here ++> node_a(); # 'here' refers to current location (root)
visit [-->]; # Navigate to connected nodes
}
Key contextual references:
- here
: References the walker's current location (the spawn point initially)
- self
: References the walker instance itself
- Spatial operations: Connect expressions and visit statements become available
Interaction with Node Abilities
Spawned walkers trigger location-bound computation:
node node_a {
can add with Adder entry; # Responds to Adder walker visits
}
impl node_a.add {
self.x = 550; # Node modifies its own state
self.y = 450; # Access to node properties via 'self'
print(int(self.x) + int(self.y)); # Computation at data location
}
This demonstrates: - Bidirectional activation: Walker spawning triggers node responses - Location-bound computation: Nodes contain computational abilities - State modification: Both walkers and nodes can modify state during interaction
Spawn Timing and Execution Flow
The execution sequence differs based on spawn location:
Node Spawn Sequence:
1. Spawn Expression: Adder() spawn root activates the walker
2. Walker Positioning: Walker is placed at the root node
3. Entry Ability Execution:
- Root node's abilities for Adder walkers (if any)
- Adder walker's abilities for the root node type
4. Topology Construction: Walker creates connections (here ++> node_a())
5. Traversal Initiation: Walker visits connected nodes (visit [-->])
6. Node Interaction: Visited nodes execute their abilities for the Adder walker
7. Computational Completion: Process continues until walker queue is exhausted
Edge Spawn Sequence:
1. Spawn Expression: Walker() spawn edge_instance
activates on edge
2. Walker Positioning: Walker is placed at edge
3. Automatic Queueing: Target node is automatically added to walker's queue
4. Edge Ability Execution:
- Edge's abilities for the walker type
- Walker's abilities for the edge type
5. Automatic Node Visit: Walker automatically visits the queued target node
6. Node Ability Execution:
- Target node's abilities for the walker type
- Walker's abilities for the node type
7. Continued Traversal: Walker proceeds based on visit statements
The key difference: edge spawning ensures both edge and node processing, while node spawning processes only the node unless explicitly visiting edges.
Spawn Patterns and Use Cases
Initialization Patterns Spawn expressions commonly initialize computational processes: - Algorithm activation: Starting search, traversal, or analysis algorithms - System initialization: Activating monitoring or management walkers - Event triggering: Spawning responsive walkers based on system events
Multiple Walker Scenarios Systems may spawn multiple walkers:
Each walker operates independently with its own traversal queue and state.
Conditional Spawning Spawn operations can be conditional:
Spawn Location Flexibility
Spawn expressions support various targets with distinct behaviors:
Node Spawning: - Behavior: Walker executes abilities only on the spawned node - Queue State: Starts with empty queue unless walker adds visits - Use Case: Starting point for graph exploration, node-centric processing
Edge Spawning: - Behavior: Walker automatically processes edge AND target node - Queue State: Target node automatically queued after edge processing - Use Case: Relationship analysis, edge-weight calculations, path following
walker spawn edge_ref; # Process edge and its target
walker spawn connection; # Analyze connection and destination
Key Behavioral Difference: - Node spawn: Single location processing - Edge spawn: Dual location processing (edge + automatic node visit)
This distinction is crucial for algorithm design:
# Node-centric algorithm
DataProcessor() spawn data_node; # Process node data only
# Edge-centric algorithm
PathAnalyzer() spawn path_edge; # Analyze path AND destination
Error Handling and Constraints
Spawn expressions have important constraints: - Walker state: Can only spawn inactive walkers (not already active) - Location validity: Spawn targets must be valid nodes, edges, or paths - Type compatibility: Walker and location types must support interaction
Performance Considerations
Spawn expressions are designed for efficiency: - Lazy activation: Walkers only consume resources when active - Context switching: Minimal overhead for walker state transitions - Memory locality: Spawned walkers can exploit data locality at spawn points
Integration with Traditional Programming
Spawn expressions bridge DSP and traditional programming: - Method integration: Can be called from regular methods and functions - Conditional logic: Work with standard control flow constructs - Data preparation: Can follow traditional data initialization patterns
The example demonstrates a complete spawn-to-computation cycle where a walker is spawned, builds topology, traverses to connected nodes, and triggers location-bound computation. This showcases how spawn expressions initialize the distributed computational process that characterizes Data Spatial Programming, transforming passive objects into active participants in a topologically-aware computational system.
Comprehensive Example: Node vs Edge Spawning
edge Connection {
has weight: float;
can process with AnalysisWalker entry {
print(f"Processing edge with weight: {self.weight}");
}
}
node DataPoint {
has value: int;
can analyze with AnalysisWalker entry {
print(f"Analyzing node with value: {self.value}");
}
}
walker AnalysisWalker {
can traverse with DataPoint entry {
# Default behavior: visit nodes
print("Visiting connected nodes:");
visit [-->]; # Only visits nodes
# Explicit edge traversal
print("Visiting edges and their nodes:");
visit [edge -->]; # Visits edges AND nodes
}
}
with entry {
# Build topology
n1 = DataPoint(value=10);
n2 = DataPoint(value=20);
n3 = DataPoint(value=30);
edge1 = n1 +>:Connection(weight=0.5):+> n2;
edge2 = n2 +>:Connection(weight=0.8):+> n3;
# Node spawn - processes only the starting node
print("=== Node Spawn ===");
AnalysisWalker() spawn n1;
# Edge spawn - processes edge AND automatically visits target
print("=== Edge Spawn ===");
AnalysisWalker() spawn edge1;
}
Output demonstrates the behavioral difference: - Node spawn: Starts at n1, processes node abilities only - Edge spawn: Starts at edge1, processes edge abilities, then automatically visits n2
This example illustrates how spawn location affects the initial computational flow and how edge references ([edge -->]
) enable explicit edge processing during traversal.
Spawn expressions represent the activation gateway between traditional object-oriented programming and data spatial computation, enabling the transition from static object interactions to dynamic, topology-driven computational flows.
Unpack expressions#
Code Example
Runnable Example in Jac and JacLib
def combine_via_func(a: int, b: int, c: int, d: int) -> int {
return a + b + c + d;
}
with entry {
first_list = [1, 2, 3, 4, 5];
second_list = [5, 8, 7, 6, 9];
combined_list = [*first_list, *second_list];
print(combined_list);
# Original dictionary
first_dict = {'a':1, 'b':2 };
# Another dictionary
second_dict = {'c':3, 'd':4 };
# Combining dictionaries using dictionary unpacking
combined_dict = {**first_dict, **second_dict };
# Printing the combined dictionary
print(combine_via_func(**combined_dict));
print(combine_via_func(**first_dict, **second_dict));
}
Jac Grammar Snippet
Description
Unpack expressions enable the expansion of iterables and mappings into their constituent elements using the *
and **
operators. Jac follows Python's unpacking semantics while integrating seamlessly with pipe operations and object-spatial programming constructs.
Iterable Unpacking#
The single asterisk (*
) operator unpacks iterables into individual elements:
first = [1, 2, 3];
second = [4, 5];
combined = [*first, *second]; # [1, 2, 3, 4, 5]
coords = (3, 4);
point3d = (*coords, 5); # (3, 4, 5)
Unpacking preserves evaluation order, ensuring predictable behavior when side effects are involved.
Mapping Unpacking#
The double asterisk (**
) operator unpacks mappings into key-value pairs:
base = {"a": 1, "b": 2};
extend = {"b": 99, "c": 3};
merged = {**base, **extend}; # {"a": 1, "b": 99, "c": 3}
When duplicate keys exist, later values override earlier ones, following Python's precedence rules.
Function Call Unpacking#
Unpacking integrates with function calls and pipe operations:
def process_data(x: int, y: int, z: int) -> int {
return x + y + z;
}
# Traditional call unpacking
args = [1, 2, 3];
result = process_data(*args);
# Pipe operation with unpacking
kwargs = {"x": 1, "y": 2, "z": 3};
result = kwargs |> process_data;
Mixed Argument Patterns#
Unpacking can be combined with explicit arguments in flexible patterns:
def complex_function(a, b, c=10, d=20) {
return a + b + c + d;
}
# Mixed positional and keyword unpacking
positional = [1, 2];
keywords = {"d": 30};
result = complex_function(*positional, c=15, **keywords);
Integration with Object-Spatial Operations#
Unpacking works seamlessly with object-spatial constructs:
walker DataCollector {
has collected_data: list = [];
can gather with entry {
# Unpack node data into processing function
node_values = here.get_values();
processed = process_batch(*node_values);
# Collect results using unpacking
self.collected_data = [*self.collected_data, *processed];
}
}
node DataNode {
has config: dict;
can configure with visitor entry {
# Unpack configuration into visitor
visitor.update_config(**self.config);
}
}
Type Safety and Validation#
Unpacking operations include runtime type checking:
- * requires iterable objects (lists, tuples, sets, etc.)
- ** requires mapping objects with string keys
- Type mismatches raise TypeError at runtime
Performance Considerations#
Unpacking creates new collections rather than sharing references, ensuring data isolation but requiring consideration of memory usage in performance-critical applications. The compiler optimizes common unpacking patterns to minimize overhead.
Unpack expressions provide essential functionality for flexible data manipulation while maintaining the clean, expressive syntax that characterizes Jac's approach to both traditional programming and object-spatial operations.
References (unused)#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
The "References (unused)" section in Jac's grammar represents reference patterns that are currently defined but not actively utilized in the language implementation. This section documents these unused reference constructs for completeness.
Current Status#
The grammar defines a ref
rule that is currently bypassed:
# Grammar definition (unused):
# ref: BW_AND? pipe_call
# Current implementation uses pipe_call directly
Potential Reference Syntax#
If implemented, references could support:
# Hypothetical reference syntax (not implemented)
let value = 42;
let ref_to_value = &value; # Reference to variable
let func_ref = &function; # Reference to function
Current Alternatives#
Jac handles similar needs through existing mechanisms:
Direct access:
Function objects:
can processor(data: list) -> dict {
return {"processed": data};
}
let func = processor; # Functions are first-class objects
result = func(my_data);
Object-Spatial Context#
Reference-like behavior is achieved through spatial navigation:
walker DataProcessor {
can process with entry {
here.value = process(here.value); # Direct node access
visit [-->]; # Direct navigation
}
}
Future Considerations#
The unused reference syntax may support future enhancements:
- Performance optimization for large data structures
- Advanced memory management
- Enhanced object-spatial operations
- Better interoperability with systems programming
Documentation Purpose#
This documentation acknowledges unused grammar constructs while explaining current alternatives and potential future development directions.
Object spatial calls#
Code Example
Runnable Example in Jac and JacLib
walker Creator {
can func2 with `root entry;
}
node node_1 {
has val: int;
can func_1 with Creator entry;
}
impl node_1.func_1 {
print("visiting ", self);
visit [-->];
}
impl Creator.func2 {
end = here;
for i=0 to i<5 by i+=1 {
end ++> (end := node_1(val=i + 1));
}
visit [-->];
}
with entry {
root spawn :> Creator;
root spawn |> Creator;
}
Jac Grammar Snippet
Description
Data spatial calls represent specialized operators that enable computation to move through topological structures rather than data moving to computation. These operators fundamentally invert traditional programming paradigms by activating computational entities within graph structures and enabling fluid data transformations.
Spawn Operator (spawn)#
The spawn operator activates a walker within the topological structure, transitioning it from an inactive object to an active computational entity positioned at a specific location:
walker_instance spawn node_location;
walker_instance spawn edge_location;
walker_instance spawn path_collection;
When spawning occurs, the walker transitions from standard object state to active data spatial participant. The spawn operation places the walker at the specified location and triggers all relevant entry abilities, initiating the distributed computation model where both the walker and the location can respond to the interaction.
Pipe Operators#
Jac provides two pipe operators that enable functional-style data flow and method chaining:
Standard Pipe Forward (|>
): Enables left-to-right data flow with normal operator precedence, allowing values to flow through transformation chains without nested function calls.
Atomic Pipe Forward (:>
): Provides higher precedence piping for tighter binding in complex expressions, ensuring predictable evaluation order in sophisticated data transformations.
# Standard piping for data transformation
data |> normalize |> validate |> process;
# Atomic piping for method chaining
node :> get_neighbors :> filter_by_type :> collect;
Asynchronous Operations#
The await
operator synchronizes with asynchronous walker operations and concurrent graph traversals, ensuring proper execution ordering when walkers operate in parallel or when graph operations require coordination across distributed computational entities.
Integration with Data Spatial Model#
These operators work seamlessly within the data spatial programming paradigm:
walker GraphProcessor {
can analyze with entry {
# Spawn child walkers on filtered paths
child_walker spawn (here --> [node::has_data]);
# Transform data using pipes
result = here.data |> clean :> analyze |> summarize;
# Continue traversal based on results
if (result.score > threshold) {
visit here.neighbors;
}
}
}
Data spatial calls embody the core principle of computation moving to data, enabling walkers to activate distributed computational behaviors throughout the topological structure while maintaining clean, expressive syntax for complex graph operations.
Subscripted and dotted expressions#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac provides comprehensive data access mechanisms through attribute access and subscript operations that extend Python's familiar syntax with additional conveniences for pipe operations and null-safe access patterns.
Attribute Access#
Standard dot notation provides access to object attributes and methods:
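A small self-contained sketch (the Point archetype and its fields are illustrative):
obj Point {
    has x: float = 3.0,
        y: float = 4.0;

    def length(self) -> float {
        return (self.x ** 2 + self.y ** 2) ** 0.5;
    }
}

with entry {
    p = Point();
    print(p.x);         # attribute access
    print(p.length());  # method access
}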
Jac extends attribute access with directional dot operators that integrate with pipe expressions:
Operator | Syntax | Purpose |
---|---|---|
. | obj.attr | Standard attribute access |
.> | obj.>method | Forward piping attribute access |
<. | obj<.method | Backward piping attribute access |
The directional operators provide syntactic sugar for pipe operations, enabling more fluid expression chaining.
Null-Safe Access#
The optional access operator (?) provides null-safe attribute and method access:
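A sketch of the pattern (maybe_user, address, and city are illustrative names):
city = maybe_user?.address?.city;  # yields None instead of raising when maybe_user or address is None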
This operator short-circuits the entire access chain when encountering null values, preventing runtime errors in complex object hierarchies.
Subscript Operations#
Array-style indexing follows Python conventions with support for negative indices and slice operations:
letters = ["a", "b", "c", "d"];
print(letters[0]); # "a"
print(letters[1:3]); # ["b", "c"]
print(letters[-1]); # "d"
print(letters[::2]); # ["a", "c"] - every second element
Subscript operations support the full range of Python slicing syntax including start, stop, and step parameters.
Chained Access Patterns#
Attribute and subscript operations can be freely combined to access nested data structures:
node DataContainer {
has metadata: dict = {"values": [1, 2, 3], "config": {"debug": true}};
}
container = DataContainer();
value = container.metadata["values"][2]; # 3
debug_mode = container.metadata["config"]["debug"]; # true
Null-Safe Subscripting#
Null-safe access extends to subscript operations:
This pattern is particularly useful when working with optional configuration data or API responses with variable structure.
Integration with Object-Spatial Constructs#
Access operations work seamlessly with object-spatial programming elements:
walker DataInspector {
can analyze with entry {
# Safe access to node properties
node_type = here?.node_type;
data_size = here?.data?.["size"];
# Process based on available data
if (node_type == "processing" and data_size > threshold) {
visit here.high_priority_neighbors;
}
}
}
node ProcessingNode {
has data: dict;
has node_type: str = "processing";
has high_priority_neighbors: list;
can get_status with visitor entry {
# Visitor can access node data safely
status = self.data?.["status"] or "unknown";
visitor.record_status(status);
}
}
Performance Considerations#
Null-safe operations include runtime checks that add minimal overhead while significantly improving code robustness. The compiler optimizes common access patterns to minimize performance impact.
Error Handling#
Standard access operations raise appropriate exceptions for invalid keys or attributes, while null-safe operations return None
for missing intermediate values. This distinction enables explicit error handling strategies based on application requirements.
Subscripted and dotted expressions provide the foundation for safe, expressive data access patterns that integrate naturally with both traditional programming constructs and object-spatial operations.
Function calls#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Function calls in Jac provide the fundamental mechanism for invoking defined functions and methods, supporting both traditional positional arguments and named keyword arguments. The function call system integrates seamlessly with Jac's type system and expression evaluation, enabling flexible and expressive function invocation patterns.
Basic Function Call Syntax
Function calls in Jac follow the familiar pattern:
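That is, a callee followed by a parenthesized argument list, for example (compute_total and its parameters are illustrative):
result = compute_total(10, 20, tax_rate=0.07);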
Function Definition Context
The example demonstrates calling a function with a clear signature:
Key aspects: - Required type annotations: All parameters must specify their types - Multiple return values: Functions can return tuples - Clear interface: Type system provides compile-time verification
Keyword Arguments
Jac supports keyword argument syntax for explicit parameter naming:
Benefits of keyword arguments: - Clarity: Makes function calls self-documenting - Flexibility: Arguments can be provided in any order - Maintainability: Changes to parameter order don't break existing calls - Readability: Complex function calls become more understandable
Complex Expressions as Arguments
Function arguments can be sophisticated expressions:
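The call analyzed in the evaluation-order breakdown below has the following shape, where foo and a come from the runnable example referenced above:
output = foo(x=4, y=4 if a % 3 == 2 else 3, z=9);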
This demonstrates:
- Conditional expressions: Using ternary operator syntax in arguments
- Variable references: Accessing variables from enclosing scope (a
)
- Expression evaluation: Complex computations resolved before function call
- Type safety: Expression results must match parameter types
Argument Evaluation Order
Arguments are evaluated from left to right before the function is called:
1. x=4
evaluates to 4
2. y=4 if a % 3 == 2 else 3
evaluates the conditional expression
3. z=9
evaluates to 9
4. Function foo
is called with the resolved values
Return Value Handling
Functions can return multiple values as tuples:
Calling code receives the tuple:
The output
variable contains the returned tuple, which can be:
- Used directly: Passed to other functions or printed
- Unpacked: Destructured into individual variables
- Indexed: Accessed using tuple indexing syntax
Mixed Argument Styles
Jac supports combining positional and keyword arguments:
# Positional arguments first
result = foo(4, y=3, z=9);
# All keyword arguments
result = foo(x=4, y=3, z=9);
# All positional arguments
result = foo(4, 3, 9);
Method Calls
Function call syntax extends to method invocation:
Static method calls:
Chained Calls
Function calls can be chained for fluent interfaces:
Function Calls in Expressions
Function calls integrate with all expression contexts:
Arithmetic expressions:
Conditional expressions:
Assignment expressions:
Nested function calls:
Error Handling
Function calls participate in Jac's exception handling:
Type Safety and Validation
Jac's type system ensures function call safety: - Compile-time checking: Argument types verified against parameter types - Type inference: Return types inferred for further usage - Error prevention: Mismatched types caught before runtime
Performance Considerations
Function calls in Jac are optimized for: - Efficient argument passing: Minimal overhead for parameter transmission - Type specialization: Optimized execution paths for specific type combinations - Inlining opportunities: Small functions may be inlined for performance
Integration with Object-Spatial Features
Function calls work seamlessly with Jac's object-spatial constructs:
Within walker abilities:
walker Processor {
can process with `node entry {
result = calculate_value(here.property);
here.update_state(result);
}
}
Within node abilities:
node DataNode {
can process with Walker entry {
processed = transform_data(self.data);
visitor.receive_result(processed);
}
}
Common Patterns
Configuration and Setup
Data Processing
cleaned_data = clean_data(raw_data, rules=cleaning_rules);
processed = transform(cleaned_data, format="json");
Validation and Error Checking
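A sketch of the pattern; `validate_input`, `user_data`, and `expected_schema` are hypothetical names used only for illustration:

if not validate_input(user_data, schema=expected_schema) {
    raise ValueError("input failed validation");
}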
Best Practices
- Use keyword arguments: For functions with multiple parameters of the same type
- Type consistency: Ensure argument expressions match parameter types
- Clear naming: Choose descriptive function and parameter names
- Error handling: Wrap potentially failing function calls in try-except blocks
- Documentation: Use type annotations to self-document function interfaces
Function calls in Jac provide a robust foundation for code organization and reuse, combining the familiarity of traditional function invocation with the safety and expressiveness of a modern type system. The support for keyword arguments and complex expressions as parameters enables clear, maintainable code that integrates well with both traditional programming patterns and Jac's innovative object-spatial features.
Atom#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Atomic expressions in Jac represent the most basic building blocks of expressions - the fundamental units that cannot be broken down further. These include literals, references, collections, and other primary elements that form the foundation of more complex expressions.
Atomic Expression Types
Atomic expressions in Jac include:
- Named References: Variable names and identifiers (`a`, `x`, `list1`)
- Literals: Direct values embedded in code
    - String literals: `"abcde...."`, `"aaa"`
    - Boolean literals: `True`, `False`
    - Numeric literals in various bases:
        - Binary: `bin(12)` (though this is a function call)
        - Hexadecimal: `hex(78)` (though this is a function call)
- Collections:
    - Lists: `[2, 3, 4, 5]`
    - Tuples: `(3, 4, 5)`
- F-strings: Template strings with embedded expressions (`f"b{aa}bbcc"`)
- Parenthesized Expressions: Expressions wrapped in parentheses for grouping
- Type References: References to type objects using the backtick operator (`` ` ``)
Implementation Blocks
The code example demonstrates the use of `impl` blocks, which can contain atomic expressions as initialization values. The `impl x` block shows how atomic expressions can be used within implementation contexts.
Global Variables
Atomic expressions are commonly used in global variable declarations, as shown with `glob c = (3, 4, 5), list1 = [2, 3, 4, 5]`, where tuples and lists serve as atomic collection expressions.
String Concatenation and F-strings
Jac supports string concatenation using the `+` operator and f-string interpolation, where expressions can be embedded within strings using curly braces (`{}`). The example shows `"aaa" + f"b{aa}bbcc"` combining a regular string with an f-string.
Enumeration Access
The example also demonstrates atomic access to enumeration values using dot notation (`x.y.value`), showing how atomic expressions can chain together to access nested properties.
Atomic expressions serve as the fundamental building blocks that combine with operators and control structures to create more complex Jac programs. Understanding these basic elements is essential for writing any Jac code.
Collection values#
Code Example
Runnable Example in Jac and JacLib
with entry {
squares = {num: num ** 2 for num in range(1, 6)};
even_squares_set = {num ** 2 for num in range(1, 11) if num % 2 == 0};
squares_generator = (num ** 2 for num in range(1, 6));
squares_list = [num ** 2 for num in squares_generator if num != 9];
print(
"\n".join(
[str(squares), str(even_squares_set), str(squares_list)]
)
);
print(
{"a": "b", "c": "d"}, # Dictionary
{"a"}, # Set
("a", ), # Tuple
['a'] # List
);
}
Jac Grammar Snippet
Description
Collection values in Jac provide rich data structures for organizing and manipulating groups of related data. Jac supports all major collection types found in modern programming languages, along with powerful comprehension syntax for creating collections programmatically.
Basic Collection Types
Dictionary (`dict`)
- Key-value mappings using curly braces: {"a": "b", "c": "d"}
- Keys and values can be of any type
- Mutable and ordered (insertion order preserved)
Set (`set`)
- Unordered collections of unique elements: {"a"}
- Automatically eliminates duplicates
- Mutable and supports mathematical set operations
Tuple (`tuple`)
- Ordered, immutable sequences: ("a", )
- Note the trailing comma for single-element tuples
- Useful for fixed-size data groupings
List (`list`)
- Ordered, mutable sequences: ['a']
- Support indexing, slicing, and dynamic resizing
- Most versatile collection type for general use
Collection Comprehensions
Jac supports comprehensive syntax for creating collections using iterative expressions:
Dictionary Comprehensions
Set Comprehensions
List Comprehensions
Generator Expressions
Comprehension Features
- Filtering: Use `if` conditions to selectively include elements
- Transformation: Apply expressions to transform source data
- Nested iteration: Support for multiple `for` clauses
- Conditional logic: Complex filtering and transformation logic
The provided code example demonstrates practical usage of all collection types and comprehensions, showing how to create dictionaries with computed values, filter sets based on conditions, generate sequences efficiently, and work with basic collection literals.
Tuples and Jac Tuples#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Jac provides two distinct tuple syntaxes that serve different programming needs: traditional positional tuples for ordered data and keyword tuples for labeled data structures that integrate seamlessly with pipe operations and object-spatial programming.
Positional Tuples#
Positional tuples follow Python's immutable ordered collection semantics:
coords = (10, 20);
print(coords[0]); # → 10
# Single-element tuples require trailing comma
singleton = (42,);
Positional tuples support standard sequence operations including slicing, concatenation, and indexing, providing familiar behavior for developers transitioning from Python.
Keyword Tuples#
Keyword tuples are a Jac-specific extension that associates labels with tuple elements, creating self-documenting data structures:
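A minimal sketch with illustrative field names; the two access forms follow the description below:

with entry {
    point = (x=10, y=20);
    print(point.x);       # dot-notation access
    print(point["y"]);    # dictionary-style access, per the description below
}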
Each element in a keyword tuple is tagged with a field name that persists at runtime, enabling both dot notation and dictionary-style access patterns.
Pipeline Integration#
Keyword tuples integrate naturally with Jac's pipe operators, enabling clean parameter passing without explicit argument lists:
walker DataProcessor {
can analyze with entry {
# Build labeled data tuple and pipe to function
(node_id=here.id, data_size=len(here.data),
neighbor_count=len([-->])) |> process_metrics;
}
}
def process_metrics(node_id: str, data_size: int, neighbor_count: int) {
print(f"Node {node_id}: {data_size} bytes, {neighbor_count} neighbors");
}
This pattern eliminates the need for long parameter lists while maintaining clear semantic meaning.
Mixed Tuple Syntax#
Jac allows combining positional and keyword elements within a single tuple, with positional elements required to precede keyword elements:
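A short illustrative sketch, with positional elements placed before keyword elements as required:

record = (1, 2, label="origin", active=True);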
This ordering constraint ensures unambiguous parsing while providing flexibility for complex data structures.
Destructuring Assignment#
Both tuple types support destructuring assignment with appropriate syntax:
# Positional destructuring
let (x, y) = coords;
# Keyword destructuring (order-independent)
let (y=latitude, x=longitude) = point;
Keyword destructuring matches variables by label rather than position, providing more robust code when tuple structure evolves.
Object-Spatial Applications#
Tuples integrate effectively with object-spatial programming patterns:
node MetricsNode {
can compute_stats with visitor entry {
# Create labeled metrics tuple
stats = (
processing_time=self.get_processing_time(),
memory_usage=self.get_memory_usage(),
throughput=self.calculate_throughput()
);
# Pass to visitor for aggregation
visitor.collect_metrics(stats);
}
}
walker MetricsCollector {
has collected_metrics: list = [];
can collect_metrics(metrics: tuple) {
self.collected_metrics.append(metrics);
}
}
Performance and Memory Considerations#
Both tuple types are immutable, ensuring thread safety and enabling optimization opportunities. Keyword tuples carry additional metadata for field names but provide enhanced readability and maintainability for complex data structures.
Usage Guidelines#
Positional tuples are ideal for simple ordered data, mathematical coordinates, and compatibility with Python libraries.
Keyword tuples excel in heterogeneous data representation, pipeline operations, and scenarios requiring explicit semantic labeling.
The choice between tuple types should reflect the intended use pattern and the importance of self-documenting code structure in the specific application context.
Object-Spatial References#
Code Example
Runnable Example in Jac and JacLib
walker Creator {
can create with `root entry;
}
node node_a {
has val: int;
can make_something with Creator entry;
}
edge connector {
has value: int = 10;
}
impl Creator.create {
end = here;
for i=0 to i<3 by i+=1 {
end ++> (end := node_a(val=i));
}
end +>:connector:value=i:+> (end := node_a(val=i + 10));
root <+:connector:value=i:<+ (end := node_a(val=i + 10));
visit [-->];
}
impl node_a.make_something {
i = 0;
while i < 5 {
print(f"wlecome to {self}");
i += 1;
}
}
with entry {
root spawn Creator();
}
Jac Grammar Snippet
Description
Data spatial references provide specialized syntax for navigating and manipulating topological structures, enabling direct expression of graph relationships and traversal patterns. These references make topological relationships first-class citizens in the programming model.
Edge Reference Syntax#
Edge references use square bracket notation with directional operators to express graph navigation:
[-->] # All nodes connected by outgoing edges (default)
[<--] # All nodes connected by incoming edges (default)
[<-->] # All nodes connected by bidirectional edges (default)
[edge -->] # All outgoing edges themselves
[edge <--] # All incoming edges themselves
[edge <-->] # All bidirectional edges themselves
[-->:EdgeType:] # Typed nodes via outgoing edges
[edge -->:EdgeType:] # Typed outgoing edges themselves
[node --> target] # Specific edge path to nodes
[edge --> target] # Specific edges in path
The square bracket syntax creates collections of edges or nodes that can be used for traversal, filtering, or manipulation operations. By default, edge reference syntax returns the connected nodes, not the edges themselves. To explicitly reference edges, use the `edge` keyword prefix.
Node vs Edge References#
Understanding the distinction between node and edge references is crucial for effective graph navigation:
Default Node References:
[-->] # Returns: connected nodes via outgoing edges
[<--] # Returns: connected nodes via incoming edges
visit [-->]; # Walker visits the connected nodes
Explicit Edge References:
[edge -->] # Returns: the edge objects themselves
[edge <--] # Returns: incoming edge objects
visit [edge -->]; # Walker visits the edges (and their connected nodes)
When a walker visits edges explicitly, it will execute abilities on both the edge and its connected node, providing access to edge properties and enabling edge-based computation.
Directional Navigation#
Directional operators express the flow of relationships within the graph:
Outgoing (`-->`): References edges that originate from the current node, representing relationships where the current node is the source.
Incoming (`<--`): References edges that terminate at the current node, representing relationships where the current node is the target.
Bidirectional (`<-->`): References edges that can be traversed in either direction, representing symmetric relationships.
Edge Connection Operations#
Connection operators create new edges between nodes, establishing topological relationships:
source_node ++> target_node; # Create directed edge
source_node <++ target_node; # Create reverse directed edge
source_node <++> target_node; # Create bidirectional edge
source_node ++>:EdgeType(weight=5):++> target; # Create typed edge with data
These operators enable dynamic graph construction where relationships can be established programmatically based on computational logic.
Edge Disconnection Operations#
The `del` operator removes edges from the graph structure:
del source_node --> target_node; # Remove specific edge
del [-->]; # Remove all outgoing edges
del [<--:FollowEdge:]; # Remove typed incoming edges
Disconnection operations maintain graph integrity by properly cleaning up references and ensuring consistent topological state.
Filtered References#
Edge references support inline filtering for selective graph operations:
# Node filtering (default behavior)
[-->(active == true)] # Nodes that are active
[<--(score > threshold)] # Nodes with high scores
[<-->(?name.startswith("test"))] # Nodes with test names
# Edge filtering (explicit edge references)
[edge -->(weight > threshold)] # Edges meeting weight criteria
[edge <--:FollowEdge:] # Incoming edges of specific type
[edge <-->(`ConnectionType)] # Edges of specific type
Filtering enables precise graph queries that combine topological navigation with data-driven selection criteria. When filtering edges explicitly, the walker can access edge properties during traversal.
Integration with Walker Traversal#
Data spatial references integrate seamlessly with walker traversal patterns:
walker NetworkAnalyzer {
has visited: set = set();
can explore with entry {
# Mark current node as visited
self.visited.add(here);
# Find unvisited neighbors (returns nodes by default)
unvisited_neighbors = [-->] |> filter(|n| n not in self.visited);
# Continue traversal to unvisited nodes
if (unvisited_neighbors) {
visit unvisited_neighbors; # Visits nodes only
}
# Analyze connection patterns (edge references)
strong_connections = [edge <-->:StrongEdge:];
weak_connections = [edge <-->:WeakEdge:];
# Visit edges to analyze their properties
# This will execute abilities on both edges and connected nodes
visit [edge -->:AnalysisEdge:];
# Report analysis results
report {
"node_id": here.id,
"strong_count": len(strong_connections),
"weak_count": len(weak_connections)
};
}
}
When visiting edges explicitly with `visit [edge -->]`, the walker will:
1. Execute entry abilities on the edge
2. Automatically queue and visit the connected node
3. Execute abilities on both the edge and the target node
When visiting nodes with `visit [-->]` (default), the walker will:
1. Execute abilities only on the target nodes
2. Skip edge traversal abilities
Type-Safe Graph Operations#
References support type checking and validation for robust graph manipulation:
node DataNode {
has data: dict;
has node_type: str;
}
edge ProcessingEdge(DataNode, DataNode) {
has processing_weight: float;
has edge_type: str = "processing";
}
walker TypedProcessor {
can process with DataNode entry {
# Type-safe edge references
processing_edges = [-->:ProcessingEdge:];
# Filtered by edge properties
high_priority = processing_edges |> filter(|e| e.processing_weight > 0.8);
# Continue to high-priority targets
visit high_priority |> map(|e| e.target);
}
}
Dynamic Graph Construction#
References enable dynamic graph construction based on runtime conditions:
walker GraphBuilder {
can build_connections with entry {
# Analyze current node data
similarity_threshold = 0.7;
# Find similar nodes in the graph
all_nodes = [-->*]; # Get all reachable nodes
similar_nodes = all_nodes |> filter(|n|
calculate_similarity(here.data, n.data) > similarity_threshold
);
# Create similarity edges
for similar_node in similar_nodes {
similarity_score = calculate_similarity(here.data, similar_node.data);
here ++>:SimilarityEdge(score=similarity_score):++> similar_node;
}
}
}
Data spatial references provide the foundational syntax for expressing topological relationships and enabling computation to flow naturally through graph structures, making complex graph algorithms both intuitive and maintainable.
Edge vs Node Traversal Behavior#
Understanding the distinction between edge and node traversal is fundamental to effective data spatial programming:
Default Node Traversal:
- `[-->]` returns connected nodes, not edges
- `visit [-->]` executes abilities only on target nodes
- Walker moves directly from node to node
- Edge properties are not accessible during traversal
Explicit Edge Traversal:
- `[edge -->]` returns edge objects themselves
- `visit [edge -->]` executes abilities on both edges and nodes
- Walker processes edge first, then automatically queues connected node
- Full access to edge properties and data during traversal
This distinction enables precise control over computational flow:
# Process only nodes
visit [-->]; # Direct node-to-node movement
# Process edges and nodes
visit [edge -->]; # Edge abilities execute, then node abilities
# Access edge data without traversal
edge_weights = [edge -->] |> map(|e| e.weight);
# Filter by edge properties, visit connected nodes
high_priority_nodes = [edge -->(priority > 0.8)] |> map(|e| e.target);
visit high_priority_nodes;
The choice between node and edge traversal depends on whether edge computation or properties are needed for the algorithm's logic.
Special Comprehensions#
Code Example
Runnable Example in Jac and JacLib
#Filter comprehension
import random;
obj TestObj {
has x: int = random.randint(0, 15),
y: int = random.randint(0, 15),
z: int = random.randint(0, 15);
}
with entry {
random.seed(42);
apple = [];
for i=0 to i<100 by i+=1 {
apple.append(TestObj());
}
    # check that every apple's randomly generated x is between 0 and 15
print(apple(?x >= 0, x <= 15) == apple);
}
obj MyObj {
has apple: int = 0,
banana: int = 0;
}
with entry {
x = MyObj();
y = MyObj();
mvar = [x, y](=apple=5, banana=7);
print(mvar);
}
Jac Grammar Snippet
Description
Special comprehensions in Jac extend traditional list comprehensions with powerful filtering and assignment capabilities. These constructs enable concise manipulation of data structures, particularly in graph traversal contexts.
Filter Comprehensions#
Filter comprehensions apply conditional filtering with optional null-safety:
# Basic filter comprehension
(property > value)
# Null-safe filter
(? property > value)
# Typed filter comprehension
(`TypeName: property == value)
Usage in Context:
# Filter nodes by property
filtered_nodes = [-->(?score > 0.5)];
# Filter with type checking
typed_edges = [<--(`Connection: weight > 10)];
Assignment Comprehensions#
Assignment comprehensions enable in-place property updates:
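The runnable example at the top of this section shows the form; its core line, repeated here as a minimal sketch:

mvar = [x, y](=apple=5, banana=7);   # assigns apple=5 and banana=7 on both objects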
Practical Applications:
walker Updater {
can update with entry {
# Update all connected nodes
[-->](=visited: True, timestamp: now());
# Conditional update with filter
[-->(score > 0.8)](=category: "high");
}
}
Filter Compare Lists#
Complex filtering with multiple conditions:
# Multiple property comparisons
(age > 18, status == "active", score >= 0.7)
# Mixed comparisons
(name != "admin", role in ["user", "guest"])
Typed Filter Compare Lists#
Type-specific filtering with property constraints:
# Type with property filters
`UserNode: (active == True, last_login > cutoff_date)
# Edge type filtering
`FriendEdge: (mutual == True, years > 2)
Integration with Object-Spatial Operations#
Special comprehensions shine in graph operations:
node DataNode {
has value: float;
has category: str;
has processed: bool = False;
}
walker Processor {
can process with entry {
# Filter and traverse
high_value = [-->(value > 100)];
visit high_value;
# Update visited nodes
[-->](=processed: True);
# Complex filtering
candidates = [<--(`DataNode: (
category in ["A", "B"],
processed == False,
value > threshold
))];
}
}
Comparison Operators#
Available operators for filter comprehensions:
- `==`, `!=`: Equality comparisons
- `>`, `<`, `>=`, `<=`: Numeric comparisons
- `in`, `not in`: Membership tests
- `is`, `is not`: Identity comparisons
Null-Safe Operations#
The `?` operator enables safe property access:
# Safe navigation
[-->(?nested?.property > 0)]
# Combines with assignment
[-->(?exists)](=checked: True)
Special comprehensions provide a declarative, concise syntax for complex filtering and updating operations, particularly powerful when combined with Jac's graph traversal capabilities. They reduce boilerplate code while maintaining readability and type safety.
Names and references#
Code Example
Runnable Example in Jac and JacLib
obj Animal {
has species: str;
has sound: str;
}
obj Dog(Animal) {
has breed: str;
has trick: str by postinit;
def postinit {
self.trick = "Roll over";
}
}
obj Cat(Animal) {
def init(fur_color: str) {
super.init(species="Cat", sound="Meow!");
self.fur_color = fur_color;
}
}
with entry {
dog = Dog(breed="Labrador", species="Dog", sound="Woof!");
cat = Cat(fur_color="Tabby");
print(dog.breed, dog.sound, dog.trick);
# print(f"The dog is a {dog.breed} and says '{dog.sound}'");
# print(f"The cat's fur color is {cat.fur_color}");
}
Jac Grammar Snippet
Description
Jac employs a familiar identifier system similar to Python and C-style languages while introducing specialized references essential for object-spatial programming. The naming system supports both traditional programming patterns and the unique requirements of computation moving through topological structures.
Standard Identifiers#
Standard identifiers follow conventional rules: they must begin with an ASCII letter or underscore, followed by any combination of letters, digits, or underscores:
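A few valid identifiers as a sketch (the names themselves are illustrative):

with entry {
    count = 1;
    _total = 2;
    user_name2 = "alice";
}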
Keyword Escaping#
When necessary, keywords can be used as identifiers by wrapping them with angle brackets:
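A sketch using the `<>` escape form listed under Lexer Tokens; the choice of keyword here is illustrative:

with entry {
    <>walker = "an escaped identifier, not the walker keyword";
    print(<>walker);
}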
This escaping mechanism provides flexibility when interfacing with external systems or when identifier names conflict with Jac keywords.
Special References#
Jac provides built-in special references that enable object-spatial programming patterns. These references have well-defined semantic meanings and cannot be reassigned:
Reference | Context | Purpose |
---|---|---|
`self` | Archetype methods/abilities | Current instance reference |
`here` | Walker abilities | Current node/edge location |
`visitor` | Node/edge abilities | Visiting walker reference |
`super` | Archetype methods | Parent archetype access |
`root` | Any context | Root graph instance |
`init`/`postinit` | Archetype bodies | Lifecycle hook references |
Explicit Notation for Special Variables#
These keywords are reserved by the language and must appear exactly as shown. They cannot be redefined or used for other identifiers. Their explicit spelling makes object-spatial code easier to read and prevents accidental shadowing of core context references.
Object-Spatial Reference Usage#
Special references enable the bidirectional interaction model central to object-spatial programming:
node DataNode {
has name: str;
has data: dict;
can process with visitor entry {
# 'self' refers to this node, 'visitor' to the walker
print(f"Node {self.name} processing data for {visitor.id}");
# Process data and update visitor state
result = self.analyze_data();
visitor.add_result(result);
}
}
walker DataProcessor {
has id: str;
has results: list = [];
can explore with entry {
# 'self' refers to this walker, 'here' to current location
print(f"Walker {self.id} arrived at {here.name}");
# Continue traversal based on local context
if (here.has_more_data()) {
visit here.neighbors;
}
}
}
Name Resolution Hierarchy#
Jac resolves names using a systematic hierarchy:
- Local scope: Parameters, local variables, and `let` bindings
- Enclosing archetype scope: Instance variables and methods
- Module scope: Module-level definitions and globals
- Imported modules: External module references
- Built-in references: Special references and system functions
This resolution order ensures predictable behavior while supporting both lexical scoping and object-spatial context access.
Naming Conventions#
Consistent naming enhances code clarity and supports Jac's static analysis capabilities:
- Variables and functions: `lower_snake_case`
- Archetypes and enums: `UpperCamelCase`
- Constants: `UPPER_SNAKE_CASE`
- Special references: Reserved lowercase names
Descriptive naming is particularly important in object-spatial contexts where walkers, nodes, and edges interact dynamically, making clear semantic meaning essential for maintainable code.
The naming system provides the foundation for clear, expressive object-spatial programs where computation flows through well-defined topological structures with unambiguous reference semantics.
Builtin types#
Code Example
Runnable Example in Jac and JacLib
glob a = 9.2;
glob b = 44;
glob c = [2, 4, 6, 10];
glob d = {'name':'john', 'age':28 };
glob e = ("jaseci", 5, 4, 14);
glob f = True;
glob g = "Jaseci";
glob h = {5, 8, 12, "unique"};
with entry {
print(type(a), '\n', type(b), '\n', type(c), '\n', type(d), '\n', type(e));
print(type(f), '\n', type(g), '\n', type(h));
}
Jac Grammar Snippet
Description
Jac provides a rich set of built-in data types that cover the fundamental data structures needed for most programming tasks. These types are similar to Python's built-in types but are integrated into Jac's type system and syntax.
Primitive Types
- `int`: Integer numbers (e.g., `42`, `-17`, `0`)
- `float`: Floating-point numbers (e.g., `3.14`, `-2.5`, `1e-10`)
- `str`: String literals (e.g., `"hello"`, `'world'`)
- `bool`: Boolean values (`True` or `False`)
- `bytes`: Byte sequences for binary data
Collection Types
- `list`: Ordered, mutable sequences (e.g., `[1, 2, 3]`, `['a', 'b', 'c']`)
- `tuple`: Ordered, immutable sequences (e.g., `(1, 2, 3)`, `('a', 'b')`)
- `dict`: Key-value mappings (e.g., `{'name': 'john', 'age': 28}`)
- `set`: Unordered collections of unique elements (e.g., `{1, 2, 3}`, `{'unique', 'values'}`)
Meta Types
- `type`: Represents the type of a type (metaclass)
- `any`: Represents any type (used for type annotations when the type is unknown or flexible)
Implicit Typing Library#
Jac exposes common generics from Python's `typing` module without requiring explicit imports. Standard type names such as `List`, `Dict`, and `Optional` can be referenced directly by prefixing them with a backtick:
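A minimal sketch, assuming the backtick-prefixed names can be used directly wherever a type annotation is expected (the function and parameter names are illustrative):

def first_or_none(items: `List) -> `Optional {
    return items[0] if len(items) > 0 else None;
}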
These identifiers are recognized by the compiler automatically, simplifying type annotations and eliminating repetitive import statements. All types available in Python's `typing` library are accessible through this idiom.
Type Usage
Built-in types can be used in several contexts:
- Variable declarations: `glob name: str = "Jaseci";`
- Function parameters: `def process(data: list) -> dict { ... }`
- Type checking: `type(variable)` returns the type of a variable
- Type annotations: Providing explicit type information for better code clarity
Type Inference
Jac can automatically infer types from literal values:
- `9.2` → `float`
- `44` → `int`
- `[2, 4, 6, 10]` → `list`
- `{'name':'john', 'age':28}` → `dict`
- `("jaseci", 5, 4, 14)` → `tuple`
- `True` → `bool`
- `"Jaseci"` → `str`
- `{5, 8, 12, "unique"}` → `set`
The provided code example demonstrates the declaration of global variables using different built-in types and shows how the `type()` function can be used to inspect the runtime type of variables. This type introspection capability is useful for debugging and dynamic programming scenarios.
f-string tokens#
Code Example
Runnable Example in Jac and JacLib
with entry {
x = "a";
y = 25;
print(f"Hello {x} {y} {{This is an escaped curly brace}}");
person = {"name":"Jane", "age":25 };
print(f"Hello, {person['name']}! You're {person['age']} years old.");
print("This is the first line.\n This is the second line.");
print("This will not print.\r This will be printed");
print("This is \t tabbed.");
print("Line 1\fLine 2");
words = ["Hello", "World!", "I", "am", "a", "Jactastic!"];
print(f"{'\n'.join(words)}");
}
Jac Grammar Snippet
Description
F-string tokens in Jac provide formatted string literals with embedded expressions, enabling dynamic string construction with type-safe expression evaluation. F-strings offer a readable and efficient way to create formatted text.
Basic F-String Syntax#
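A minimal example of the basic form (the values are illustrative):

with entry {
    name = "Jac";
    version = 1;
    print(f"Hello from {name} v{version}");
}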
Expression Embedding#
F-strings can embed any valid Jac expression:
# Variables and arithmetic
width = 10;
height = 5;
area_text = f"Area: {width * height} square units";
# Function calls
import math;
radius = 7.5;
circle_info = f"Circle area: {math.pi * radius ** 2:.2f}";
# Method calls
text = "hello world";
formatted = f"Uppercase: {text.upper()}, Length: {len(text)}";
Format Specifications#
value = 3.14159;
formatted = f"Pi: {value:.2f}"; # 2 decimal places
scientific = f"Value: {value:.2e}"; # Scientific notation
number = 255;
binary = f"Binary: {number:b}"; # Binary representation
hex_val = f"Hex: {number:x}"; # Hexadecimal
Object-Spatial Integration#
walker ReportGenerator {
can generate_report with entry {
node_info = f"Node {here.id}: value={here.value}, neighbors={len([-->])}";
print(node_info);
visit [-->];
}
}
Multi-Line F-Strings#
user = {"name": "Alice", "email": "alice@example.com"};
report = f"""
User Report:
Name: {user['name']}
Email: {user['email']}
Status: {'Active' if user.get('active', True) else 'Inactive'}
""";
Complex Expressions#
# Conditional expressions
score = 85;
grade = f"Grade: {('A' if score >= 90 else 'B' if score >= 80 else 'C')}";
# Safe null handling
safe = f"Name: {user.name if user else 'Unknown'}";
Performance Considerations#
- Compile-time expression parsing
- Efficient concatenation without multiple string operations
- Type-aware formatting optimization
Best Practices#
- Keep expressions simple within f-strings
- Use format specifications for consistent output
- Handle None values with conditional expressions
- Break complex f-strings into multiple lines when needed
F-strings provide a powerful and efficient mechanism for string formatting in Jac, supporting both simple variable interpolation and complex expression evaluation while maintaining type safety.
Lexer Tokens#
Code Example
Runnable Example in Jac and JacLib
Jac Grammar Snippet
Description
Lexer tokens in Jac define the fundamental building blocks that the lexical analyzer recognizes when parsing source code. These tokens represent the smallest meaningful units of the language.
Token Categories#
Built-in type tokens:
Declaration keywords:
Control flow keywords:
Data spatial keywords:
Operator Tokens#
Arithmetic operators:
Comparison operators:
Assignment operators:
Data spatial operators:
Literal Tokens#
Special Reference Tokens#
Delimiter Tokens#
Comment Tokens#
Single-line comments begin with `#` and extend to the end of the line. Jac also supports multiline comments delimited by `#*` and `*#`.
Identifier Rules#
- Case-sensitive token recognition
- Keywords take precedence over identifiers
- Escaped identifiers: `<>reserved_word`
Lexical Analysis Process#
- Character stream processing
- Token recognition using longest match
- Token classification and value assignment
- Error reporting with position information
- Token stream generation for parser
Understanding lexer tokens is fundamental to writing correct Jac code, as these tokens form the basic vocabulary for the language parser.