Akita Basics 3: An Event-Based Ping Simulation

Yifan Sun
Akita Simulation
Jan 3, 2020

So far, we have introduced the essential elements of the Akita simulation framework. This time, we use these concepts to implement a small simulation.

We let one component send a ping message to another component. The receiver waits 2 seconds and then sends a response.

We first define 2 messages, a ping message and a ping response. The only information carried by the messages is the ping sequence ID (SeqID), which lets the agents tell which ping a message belongs to.

type PingMsg struct {
    akita.MsgMeta
    SeqID int
}

func (p *PingMsg) Meta() *akita.MsgMeta {
    return &p.MsgMeta
}

type PingRsp struct {
    akita.MsgMeta
    SeqID int
}

func (p *PingRsp) Meta() *akita.MsgMeta {
    return &p.MsgMeta
}

We need to tell the agent when to initiate a ping, so we create an event for this. When we define an event, we embed a field of type EventBase. It serves as a base class, providing the must-have fields and trivial implementations of the setters and getters.

type StartPingEvent struct {
    *akita.EventBase
    Dst akita.Port
}

Also, when an agent receives a ping message, it needs to trigger an event to respond to the message. We name this event RspPingEvent.

type RspPingEvent struct {
    *akita.EventBase
    pingMsg *PingMsg
}

With all the events and messages defined, we can now start to implement the agent component. We first define the struct and its constructor.

type PingAgent struct {
    *akita.ComponentBase
    Engine    akita.Engine
    OutPort   akita.Port
    startTime []akita.VTimeInSec
    nextSeqID int
}

func NewPingAgent(name string, engine akita.Engine) *PingAgent {
    agent := &PingAgent{Engine: engine}
    agent.ComponentBase = akita.NewComponentBase(name)
    agent.OutPort = akita.NewLimitNumMsgPort(
        agent, 4, name+".OutPort")
    return agent
}

In the struct definition, we see some boilerplate fields. The ComponentBase implements trivial functions such as Name(). The agent depends on an Engine to schedule events, so we inject that dependency in the constructor. The ping agent also has only one port connecting it to the outside world, which we call OutPort. In the constructor, we create the port with the default NewLimitNumMsgPort constructor; the port can buffer 4 incoming messages.

The startTime and nextSeqID fields are PingAgent-specific state. startTime records the send time of every outgoing ping, and nextSeqID is the sequence ID of the next ping message to send. Since sequence IDs are assigned consecutively starting from 0, a ping's SeqID also serves as its index into startTime.

A component needs to implement 3 methods.

The first one is NotifyPortFree. It is called when a port’s buffer frees up at least one slot. In this example, we do not need this function, so we leave it empty. We will see how it is used to improve simulation performance in the next tutorial.

func (p *PingAgent) NotifyPortFree(
    now akita.VTimeInSec,
    port akita.Port,
) {
    // Do nothing
}

The second is NotifyRecv. A port calls this function when it receives an incoming message. In this function, we first extract the message from the port’s buffer with the Retrieve function. Since the PingAgent can process both PingMsg and PingRsp, we use a type switch to differentiate the message types and define what happens when the agent receives each type of message.

func (p *PingAgent) NotifyRecv(
    now akita.VTimeInSec,
    port akita.Port,
) {
    p.Lock()
    defer p.Unlock()
    msg := port.Retrieve(now)
    switch msg := msg.(type) {
    case *PingMsg:
        p.processPingMsg(now, msg)
    case *PingRsp:
        p.processPingRsp(now, msg)
    default:
        panic("cannot process msg of type " +
            reflect.TypeOf(msg).String())
    }
}

When programming with Akita, we follow a convention: we “process messages” and “handle events”. With this convention, one can quickly tell the type of the object from the verb.

When a PingMsg arrives, we schedule a RspPingEvent 2 seconds later. This delay is what produces the 2-second latency we will see in the output.

func (p *PingAgent) processPingMsg(
    now akita.VTimeInSec,
    msg *PingMsg,
) {
    rspEvent := RspPingEvent{
        EventBase: akita.NewEventBase(now+2, p),
        pingMsg:   msg,
    }
    p.Engine.Schedule(rspEvent)
}

When a PingRsp arrives, we look up the send time and print the ping latency.

func (p *PingAgent) processPingRsp(
    now akita.VTimeInSec,
    msg *PingRsp,
) {
    seqID := msg.SeqID
    startTime := p.startTime[seqID]
    duration := now - startTime
    fmt.Printf("Ping %d, %.2f\n", seqID, duration)
}

A PingAgent should also be able to handle 2 types of events. One is the RspPingEvent scheduled when the agent processes a PingMsg. The other is the StartPingEvent, which will be scheduled by the main function later. So, we define the third method, Handle. Similarly, we use a type switch to determine the type of the event.

func (p *PingAgent) Handle(e akita.Event) error {
    p.Lock()
    defer p.Unlock()
    switch e := e.(type) {
    case StartPingEvent:
        p.StartPing(e)
    case RspPingEvent:
        p.RspPing(e)
    default:
        panic("cannot handle event of type " +
            reflect.TypeOf(e).String())
    }
    return nil
}

During each event, we need to send a message out. For each message, the sender is responsible for setting the source, the destination, and the send time.

func (p *PingAgent) StartPing(evt StartPingEvent) {
    pingMsg := &PingMsg{
        SeqID: p.nextSeqID,
    }
    pingMsg.Src = p.OutPort
    pingMsg.Dst = evt.Dst
    pingMsg.SendTime = evt.Time()
    p.OutPort.Send(pingMsg)

    p.startTime = append(p.startTime, evt.Time())
    p.nextSeqID++
}

func (p *PingAgent) RspPing(evt RspPingEvent) {
    msg := evt.pingMsg
    rsp := &PingRsp{
        SeqID: msg.SeqID,
    }
    rsp.SendTime = evt.Time()
    rsp.Src = p.OutPort
    rsp.Dst = msg.Src
    p.OutPort.Send(rsp)
}

One thing you may have noticed, but we have not discussed yet, is how we apply locks. Locking is crucial because the simulation runs in parallel: two events of the same component may be handled by two threads simultaneously, and message receiving can also happen at the same time.

Remember that we do not allow inter-component field access? Thanks to this rule, locking is very simple: we only need to lock the whole Handle and NotifyRecv functions, and it is very unlikely to create race conditions or deadlocks.
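
To see what the locks protect against, here is a minimal, Akita-independent sketch in plain Go (the toyAgent type is purely illustrative and not part of the Akita API). Two goroutines that update a component's state concurrently would race without the lock; locking the whole method, just like we lock the whole Handle and NotifyRecv, keeps the state consistent.

package main

import (
    "fmt"
    "sync"
)

// toyAgent mimics the PingAgent locking pattern: every method that can
// be called from another thread locks the whole struct before touching
// its state. (Illustrative only; not part of Akita.)
type toyAgent struct {
    sync.Mutex
    nextSeqID int
}

// handle stands in for Handle/NotifyRecv: lock first, then mutate state.
func (a *toyAgent) handle() {
    a.Lock()
    defer a.Unlock()
    a.nextSeqID++ // without the lock, this increment would be a data race
}

func main() {
    a := &toyAgent{}
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            a.handle() // many "events" handled concurrently
        }()
    }
    wg.Wait()
    fmt.Println(a.nextSeqID) // always prints 1000 thanks to the lock
}

Running this with go run -race shows no data race; remove the Lock/Unlock pair and the race detector will complain.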

Now the only remaining task is to write the main function and connect everything. In this example, we define a SerialEngine, a DirectConnection, and two PingAgents.

engine := akita.NewSerialEngine()
conn := akita.NewDirectConnection("Conn", engine, 1*akita.GHz)
agentA := NewPingAgent("AgentA", engine)
agentB := NewPingAgent("AgentB", engine)

We connect the two agents with the following code. Here, the second argument of the PlugIn function specifies the receiver-side buffer size. It is safe to set it to 1 for now.

conn.PlugIn(agentA.OutPort, 1)
conn.PlugIn(agentB.OutPort, 1)

Finally, we schedule two events and start the simulation. You should see the output for two pings, and both latencies are 2 seconds.

e1 := StartPingEvent{
    EventBase: akita.NewEventBase(1, agentA),
    Dst:       agentB.OutPort,
}
e2 := StartPingEvent{
    EventBase: akita.NewEventBase(3, agentA),
    Dst:       agentB.OutPort,
}
engine.Schedule(e1)
engine.Schedule(e2)
engine.Run()

// Output:
// Ping 0, 2.00
// Ping 1, 2.00
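
If you want to run the full example, the fragments above can be assembled into a single main.go, roughly as follows. Treat this as a sketch: the import path (gitlab.com/akita/akita here) is an assumption and may differ for the Akita version you use, and the type and agent definitions from the earlier snippets are expected to live in the same file (they are the ones that need the fmt and reflect imports).

package main

import (
    "fmt"     // used by PingAgent.processPingRsp
    "reflect" // used in the panic messages

    "gitlab.com/akita/akita" // assumed import path; adjust to your setup
)

// PingMsg, PingRsp, StartPingEvent, RspPingEvent, PingAgent, and
// NewPingAgent are defined exactly as in the snippets above.

func main() {
    engine := akita.NewSerialEngine()
    conn := akita.NewDirectConnection("Conn", engine, 1*akita.GHz)
    agentA := NewPingAgent("AgentA", engine)
    agentB := NewPingAgent("AgentB", engine)

    conn.PlugIn(agentA.OutPort, 1)
    conn.PlugIn(agentB.OutPort, 1)

    e1 := StartPingEvent{
        EventBase: akita.NewEventBase(1, agentA),
        Dst:       agentB.OutPort,
    }
    e2 := StartPingEvent{
        EventBase: akita.NewEventBase(3, agentA),
        Dst:       agentB.OutPort,
    }
    engine.Schedule(e1)
    engine.Schedule(e2)
    engine.Run()
}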

In summary, the Akita framework uses a few simple concepts, including events, messages, ports, connections, and components, to define a simulation. Users define their own events and messages and implement their own components to simulate customized systems.

The existing simulation model is sufficient for most simulation requirements. However, we do find it challenging to implement a purely event-driven simulation. A PingAgent already involves 2 event types and 2 message types. Complex components such as caches may involve a large number of events and messages, making the logic extremely hard to follow. Traditional cycle-based simulation has the advantage of simplicity. In the next tutorial, we will see how to use the existing Akita framework to implement cycle-based simulation while still maintaining high performance.

Next: Ticking
