@@ -341,4 +341,15 @@ project/plugins/project/
hs_err_pid*
*.fir
*.json
| # ENSIME, metals and friends | |||||
| .ensime* | |||||
| .metals* | |||||
| .bloop* | |||||
| .projectile | |||||
| target/ | |||||
| scratchpad.scala | |||||
| log/ | |||||
| TODO.org | |||||
| index.html | |||||
@@ -1,26 +1,6 @@
| * Thoughts | |||||
For RISC-V use SODOR, come up with something on the chisel side
| * Doing | |||||
Figure out how bundles, defs etc. should work.
| * Now | |||||
| ** DONE Port Babby to chisel3 | |||||
| *** DONE compile and run | |||||
| *** DONE fix deprecation | |||||
| ** TODO Software test suite POC | |||||
| * Later | |||||
| ** TODO Set up folder structure | |||||
| ** TODO Figure out how to run tests | |||||
| * Milestones | |||||
** TODO Exercise 0 (Øving 0)
** TODO Hardware test suite POC
** TODO Finalize Exercise 0
** TODO Exercise 1
** TODO Exercise 2
| * Points | |||||
| ** | |||||
* Tutorials
https://github.com/ucb-bar/generator-bootcamp
@@ -46,8 +46,23 @@ val defaultVersions = Map(
libraryDependencies ++= (Seq("chisel3","chisel-iotesters").map {
dep: String => "edu.berkeley.cs" %% dep % sys.props.getOrElse(dep + "Version", defaultVersions(dep)) })
| val versionOfScala = "2.12.4" | |||||
| val fs2Version = "0.10.3" | |||||
| val catsVersion = "1.1.0" | |||||
| val catsEffectVersion = "0.10" | |||||
| libraryDependencies ++= Dependencies.backendDeps.value | |||||
scalacOptions ++= scalacOptionsVersion(scalaVersion.value)
scalacOptions ++= Seq("-language:reflectiveCalls")
javacOptions ++= javacOptionsVersion(scalaVersion.value)
| // testOptions in Test += Tests.Argument("-oF") | |||||
| resolvers += Resolver.sonatypeRepo("releases") | |||||
| addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.7") | |||||
| addCompilerPlugin("com.olegpy" %% "better-monadic-for" % "0.2.4") | |||||
| addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.full) | |||||
| testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-eS") | |||||
@@ -4,122 +4,578 @@
In this exercise you will implement a circuit capable of performing matrix-matrix
multiplication in the chisel hardware description language.
HAND IN YOUR CODE IN AN ARCHIVE NAMED AFTER YOUR USERNAME (e.g. peteraa_ex0).
PLEASE ENSURE THAT THE TESTS CAN STILL BE RUN AFTER UNZIPPING.
| * Prerequisites | |||||
| You should have some idea of how digital logic circuits work. | |||||
| * Terms | |||||
| Before delving into code it's necessary to define some terms. | |||||
| + Wire | |||||
A wire is a bundle of 1 to N conductive wires (yes, that is a recursive
definition, but I think you get what I mean). These wires are connected
either to ground or a voltage source, corresponding to 0 or 1, which
is useful for representing numbers.
We can define a wire consisting of 4 physical wires in chisel like this:
| #+begin_src scala | |||||
| val myWire = Wire(UInt(4.W)) | |||||
| #+end_src | |||||
| + Driving | |||||
A wire in and of itself is rather pointless since it doesn't do anything.
In order for something to happen we need to connect wires to each other.
| #+begin_src scala | |||||
| val wireA = Wire(UInt(4.W)) | |||||
| val wireB = Wire(UInt(4.W)) | |||||
| wireA := 2.U | |||||
| wireB := wireA | |||||
| #+end_src | |||||
| Here wireA is driven by the signal 2.U, and wireB is driven by wireA. | |||||
For well-behaved circuits it does not make sense to let a wire be driven
by multiple sources, which would make the resulting signal undefined
(maybe it makes sense for a javascript processor, I hear they love undefined).
If a signal can come from several sources, select between them explicitly
instead (see the sketch after this list).
Similarly, a circular dependency is not allowed, à la
| #+begin_src scala | |||||
| val wireA = Wire(UInt(4.W)) | |||||
| val wireB = Wire(UInt(4.W)) | |||||
| wireA := wireB | |||||
| wireB := wireA | |||||
| #+end_src | |||||
| + Module | |||||
In order to make development easier we separate functionality into modules,
each defined by its inputs and outputs.
+ Combinatorial circuit
A combinatorial circuit is a circuit whose output depends only on its
current inputs.
| + Stateful circuit | |||||
| A circuit that will give different results based on its internal state. | |||||
In common parlance, a circuit without registers (or memory) is combinatorial
while a circuit with registers is stateful.
| + Chisel Graph | |||||
A chisel program produces a graph which can be synthesized into a
transistor-level schematic of a logic circuit.
When we connected wireA and wireB above we were actually manipulating such a
graph (actually, two subgraphs that were eventually combined into one).
The chisel graph is directed, but it does allow cycles as long as they are not
combinatorial (see the sketch after this list).
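As a small illustration of the driving and cycle rules above, here is a sketch
(not part of the exercise; the module and signal names are made up) in which a
Mux selects between two candidate sources, so every signal keeps a single
driver, and a feedback loop runs through a register, which makes it a legal,
non-combinatorial cycle.
#+begin_src scala
import chisel3._

// Hypothetical example: one driver per signal, and a cycle that is legal
// because it passes through a register.
class DrivingRules extends Module {
  val io = IO(new Bundle {
    val sel = Input(Bool())
    val out = Output(UInt(4.W))
  })
  val wireA = Wire(UInt(4.W))
  val wireB = Wire(UInt(4.W))
  wireA := 2.U
  wireB := 7.U

  val acc = RegInit(0.U(4.W))
  // Feedback through a register is allowed: the loop is broken by the clock
  acc := acc + Mux(io.sel, wireA, wireB)

  // io.out has exactly one driver; the choice happens inside the Mux
  io.out := acc
}
#+end_src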
* Your first component
There are two types of digital components: combinatorial and stateful.
The first component we will consider is a simple combinatorial incrementor:
#+begin_src scala
// The package declaration and import will be omitted in further examples
package Ex0
import chisel3._

class myIncrement(incrementBy: Int) extends Module {
  val io = IO(
    new Bundle {
      val dataIn = Input(UInt(32.W))
      val dataOut = Output(UInt(32.W))
    }
  )
  io.dataOut := io.dataIn + incrementBy.U
}
#+end_src
Let's break the code down. First, myIncrement is a Module, meaning that
this class can be instantiated as a hardware circuit.
Figure [rm3] shows the module you have just declared: a 32 bit signal, dataIn,
goes in, and another 32 bit signal, dataOut, comes out.
Apart from the IO there is only one statement, which drives dataOut with
dataIn + incrementBy.
In RTL the component looks like fig [rm4]
TODO: Fig
Let's see how we can use our module:
#+begin_src scala
class myIncrementTwice(incrementBy: Int) extends Module {
  val io = IO(
    new Bundle {
      val dataIn = Input(UInt(32.W))
      val dataOut = Output(UInt(32.W))
    }
  )
  val first  = Module(new myIncrement(incrementBy))
  val second = Module(new myIncrement(incrementBy))

  first.io.dataIn  := io.dataIn
  second.io.dataIn := first.io.dataOut

  io.dataOut := second.io.dataOut
}
#+end_src
Fig [rm5] shows the RTL design; as expected, it is just two incrementors chained together.

* Scala and chisel
The code for these snippets can be found in Example.scala in the test directory.
You can run them using sbt by running ./sbt in your project root, which will open
your sbt console.

A major stumbling block when learning chisel is understanding the difference between scala and chisel.
To highlight the difference between the two, consider how HTML is generated.
| When creating a list we could just write the HTML manually | |||||
| #+begin_src html | |||||
| <ul> | |||||
| <li>Name: Siv Jensen, Affiliation: FrP</li> | |||||
| <li>Name: Jonas Gahr Støre, Affiliation: AP</li> | |||||
| <li>Name: Bjørnar Moxnes, Affiliation: Rødt</li> | |||||
| <li>Name: Malcolm Tucker, Affiliation: DOSAC</li> | |||||
| </ul> | |||||
| #+end_src | |||||
However this is rather cumbersome, so we generate HTML programmatically.
| In scala we might do something (sloppy) like this: | |||||
#+begin_src scala
import scala.collection.mutable.ArrayBuffer

def generateList(politicians: List[String], affiliations: Map[String, String]): String = {
  val inner = new ArrayBuffer[String]()
  for(ii <- 0 until politicians.size){
    val nameString = politicians(ii)
    val affiliationString = affiliations(nameString)
    inner += s"<li>Name: $nameString, Affiliation: $affiliationString</li>"
  }
  "<ul>\n" + inner.mkString("\n") + "\n</ul>"
}

// Or if you prefer brevity
def generateList2(politicians: List[String], affiliations: Map[String, String]): String = {
  val inner = politicians.map(p => s"<li>Name: $p, Affiliation: ${affiliations(p)}</li>")
  "<ul>\n" + inner.mkString("\n") + "\n</ul>"
}
#+end_src
Similarly, we can use scala constructs such as for loops to manipulate the chisel graph.
The following code shows how a for loop can be used to instantiate an arbitrary
number of modules.
#+begin_src scala
class myIncrementN(incrementBy: Int, numIncrementors: Int) extends Module {
  val io = IO(
    new Bundle {
      val dataIn = Input(UInt(32.W))
      val dataOut = Output(UInt(32.W))
    }
  )
  val incrementors = Array.fill(numIncrementors){ Module(new myIncrement(incrementBy)) }

  for(ii <- 1 until numIncrementors){
    incrementors(ii).io.dataIn := incrementors(ii - 1).io.dataOut
  }

  incrementors(0).io.dataIn := io.dataIn
  io.dataOut := incrementors(numIncrementors - 1).io.dataOut
}
#+end_src
Keep in mind that the for-loop only exists at design time, just like a for loop
generating a table in HTML will not be part of the finished HTML.
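To make this concrete, here is a sketch (hypothetical, assuming the myIncrement
module defined earlier) of what myIncrementN(incrementBy = 1, numIncrementors = 3)
elaborates to; the loop leaves no trace in the hardware, only the three instances
and their wiring remain.
#+begin_src scala
// Hand-unrolled equivalent of myIncrementN(1, 3); the for loop only ran while
// the graph was being built.
class myIncrementThreeUnrolled extends Module {
  val io = IO(
    new Bundle {
      val dataIn = Input(UInt(32.W))
      val dataOut = Output(UInt(32.W))
    }
  )
  val inc0 = Module(new myIncrement(1))
  val inc1 = Module(new myIncrement(1))
  val inc2 = Module(new myIncrement(1))

  inc0.io.dataIn := io.dataIn
  inc1.io.dataIn := inc0.io.dataOut
  inc2.io.dataIn := inc1.io.dataOut

  io.dataOut := inc2.io.dataOut
}
#+end_src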
| So, what does combinatorial mean? | |||||
| To answer that, let's create a stateful circuit first. | |||||
| *Important!* | |||||
In the HTML example, differentiating the HTML from the scala was easy because they're
fundamentally very different. With hardware and software, however, the overlap is
much larger.
A big pitfall is vector types and indexing, since these make sense both in software
and in hardware.
Here's a rather silly example highlighting the confusion:
| #+begin_src scala | |||||
| class MyVector() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val idx = Input(UInt(32.W)) | |||||
| val out = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val values = List(1, 2, 3, 4) | |||||
| io.out := values(io.idx) | |||||
| } | |||||
| #+end_src | |||||
| If you try to compile this you will get an error. | |||||
| #+begin_src scala | |||||
| sbt:chisel-module-template> compile | |||||
| ... | |||||
| [error] found : chisel3.core.UInt | |||||
| [error] required: Int | |||||
| [error] io.out := values(io.idx) | |||||
| [error] ^ | |||||
| #+end_src | |||||
This error tells us that io.idx is of the wrong type, namely a chisel UInt.
The List is a scala construct; it exists only while the design is being elaborated,
not in the synthesized hardware, so attempting to index it with a chisel type would
be like HTML trying to index into the scala code that generated it, which is nonsensical.
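Indexing the scala List with a scala Int, on the other hand, is perfectly fine,
because the lookup happens while the graph is being built. A small sketch
(hypothetical module, not part of the exercise) of that:
#+begin_src scala
import chisel3._

// A scala List indexed by a scala Int at elaboration time; the lookup is
// resolved while the graph is built and only the constant remains.
class ThirdElement extends Module {
  val io = IO(new Bundle { val out = Output(UInt(32.W)) })
  val values = List(1, 2, 3, 4) // exists only in the generator program
  io.out := values(2).U         // resolved to 3.U during elaboration
}
#+end_src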
| Let's try again: | |||||
| #+begin_src scala | |||||
| class MyVector() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val idx = Input(UInt(32.W)) | |||||
| val out = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| // val values: List[Int] = List(1, 2, 3, 4) | |||||
| val values = Vec(1, 2, 3, 4) | |||||
| io.out := values(io.idx) | |||||
| } | |||||
| #+end_src | |||||
| Egads, now we get this instead | |||||
| #+begin_src scala | |||||
| [error] /home/peteraa/datateknikk/TDT4255_EX0/src/main/scala/Tile.scala:30:16: inferred type arguments [Int] do not conform to macro method apply's type parameter bounds [T <: chisel3.Data] | |||||
| [error] val values = Vec(1, 2, 3, 4) | |||||
| [error] ^ | |||||
| [error] /home/peteraa/datateknikk/TDT4255_EX0/src/main/scala/Tile.scala:30:20: type mismatch; | |||||
| [error] found : Int(1) | |||||
| [error] required: T | |||||
| [error] val values = Vec(1, 2, 3, 4) | |||||
| ... | |||||
| #+end_src | |||||
What is going wrong here? In the error message we see that the type Int cannot be
constrained to a type T <: chisel3.Data, but what does that mean?
The <: symbol means subtype, meaning that the compiler expected the Vec to contain
chisel data types such as UInt or Bool, and a scala Int is not one of them!
A scala Int represents 32 bits in memory, whereas a chisel UInt represents a bundle
of wires that we interpret as an unsigned integer, thus they are not interchangeable
although they represent roughly the same thing.
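To make the distinction concrete, here is a tiny sketch (hypothetical names, not
part of the exercise) showing the same number as a scala Int and as chisel literals:
#+begin_src scala
import chisel3._

// A scala Int versus chisel UInt literals of different widths.
class Literals extends Module {
  val io = IO(new Bundle { val out = Output(UInt(32.W)) })
  val three: Int = 3            // exists only in the generator program
  val threeU     = 3.U          // hardware literal with an inferred width of 2 bits
  val threeWide  = 3.U(32.W)    // same value, explicitly 32 bits wide
  io.out := threeWide + three.U // a scala Int becomes hardware via .U
}
#+end_src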
| Let's fix this | |||||
#+begin_src scala
class MyVector() extends Module {
  val io = IO(
    new Bundle {
      val idx = Input(UInt(32.W))
      val out = Output(UInt(32.W))
    }
  )
  val values = Vec(0.U, 1.U, 2.U, 3.U)
  // Alternatively
  // val values = Vec(List(0, 1, 2, 3).map(scalaInt => scalaInt.U))
  io.out := values(io.idx)
}
#+end_src
| This works! | |||||
| So, it's impossible to access scala collections with chisel types, but can we do it the other way around? | |||||
| #+begin_src scala | |||||
| class MyVector() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val idx = Input(UInt(32.W)) | |||||
| val out = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
val values = Vec(0.U, 1.U, 2.U, 3.U)
| io.out := values(3) | |||||
| } | |||||
| #+end_src | |||||
| ...turns out we can? | |||||
| This is nonsensical, however thanks to behind the scenes magic the 3 is changed | |||||
| to 3.U, much like [] can be a boolean in javascript. | |||||
| To get acquainted with the (rather barebones) testing environment, let's test this. | |||||
#+begin_src scala
class MyVecSpec extends FlatSpec with Matchers {
  behavior of "MyVec"
  it should "Output whatever idx points to" in {
    wrapTester(
      chisel3.iotesters.Driver(() => new MyVector) { c =>
        new MyVecTester(c)
      } should be(true)
    )
  }
}

class MyVecTester(c: MyVector) extends PeekPokeTester(c) {
  for(ii <- 0 until 4){
    poke(c.io.idx, ii)
    expect(c.io.out, ii)
  }
}
#+end_src
| #+begin_src | |||||
| sbt:chisel-module-template> testOnly Ex0.MyVecSpec | |||||
| ... | |||||
| ... | |||||
| [info] Compiling 1 Scala source to /home/peteraa/datateknikk/TDT4255_EX0/target/scala-2.12/test-classes ... | |||||
| ... | |||||
| ... | |||||
| MyVecSpec: | |||||
| MyVec | |||||
| [info] [0.001] Elaborating design... | |||||
| ... | |||||
| Circuit state created | |||||
| [info] [0.001] SEED 1556197694422 | |||||
| test MyVector Success: 4 tests passed in 5 cycles taking 0.009254 seconds | |||||
| [info] [0.002] RAN 0 CYCLES PASSED | |||||
| - should Output whatever idx points to | |||||
| Run completed in 605 milliseconds. | |||||
| Total number of tests run: 1 | |||||
| Suites: completed 1, aborted 0 | |||||
| Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0 | |||||
| All tests passed. | |||||
| #+end_src | |||||
| Great! | |||||
| * Compile time and synthesis time | |||||
| In the HTML example, assume that we omitted the last </ul> tag. This would not | |||||
| create valid HTML, however the code will happily compile. Likewise, we can easily | |||||
| create invalid chisel: | |||||
| #+begin_src scala | |||||
| class Invalid() extends Module { | |||||
| val io = IO(new Bundle{}) | |||||
| val myVec = Module(new MyVector) | |||||
| } | |||||
| #+end_src | |||||
| This code will happily compile! | |||||
It turns out that compiling does not actually generate any chisel at all; the design
is only elaborated when it is run or tested.
Let's create a test that builds the chisel design for us:
| #+begin_src scala | |||||
| class InvalidSpec extends FlatSpec with Matchers { | |||||
| behavior of "Invalid" | |||||
| it should "Probably fail in some sort of way" in { | |||||
| chisel3.iotesters.Driver(() => new Invalid) { c => | |||||
| // chisel tester expects a test here, but we can use ??? | |||||
// which is shorthand for throwing a scala.NotImplementedError.
| // | |||||
| // This is OK, because it will fail during building. | |||||
| ??? | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| #+end_src | |||||
This gives us the rather scary error:
| #+begin_src scala | |||||
| sbt:chisel-module-template> compile | |||||
| ... | |||||
| [success] Total time: 3 s, completed Apr 25, 2019 3:15:15 PM | |||||
| ... | |||||
| sbt:chisel-module-template> testOnly Ex0.InvalidSpec | |||||
| ... | |||||
| firrtl.passes.CheckInitialization$RefNotInitializedException: @[Example.scala 25:21:@20.4] : [module Invalid] Reference myVec is not fully initialized. | |||||
| : myVec.io.idx <= VOID | |||||
| at firrtl.passes.CheckInitialization$.$anonfun$run$6(CheckInitialization.scala:83) | |||||
| at firrtl.passes.CheckInitialization$.$anonfun$run$6$adapted(CheckInitialization.scala:78) | |||||
| at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789) | |||||
| at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:138) | |||||
| at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:236) | |||||
| at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:229) | |||||
| at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) | |||||
| at scala.collection.mutable.HashMap.foreach(HashMap.scala:138) | |||||
| at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788) | |||||
| at firrtl.passes.CheckInitialization$.checkInitM$1(CheckInitialization.scala:78) | |||||
| #+end_src | |||||
| While scary, the actual error is only this line: | |||||
| #+begin_src scala | |||||
| firrtl.passes.CheckInitialization$RefNotInitializedException: @[Example.scala 25:21:@20.4] : [module Invalid] Reference myVec is not fully initialized. | |||||
| : myVec.io.idx <= VOID | |||||
| #+end_src | |||||
This tells us that myVec has uninitialized wires!
While our scala program is correct, it produces an incorrect design: the scala part of
the code compiles, but the chisel part does not synthesize.
| Let's fix it: | |||||
| #+begin_src scala | |||||
| class Invalid() extends Module { | |||||
| val io = IO(new Bundle{}) | |||||
| val myVec = Module(new MyVector) | |||||
| myVec.io.idx := 0.U | |||||
| } | |||||
| #+end_src | |||||
| Hooray, now we get `scala.NotImplementedError: an implementation is missing` | |||||
as expected, along with an enormous stack trace.
The observant reader may have noticed that it is perfectly legal to put chisel types in a
scala collection, so how does that work?
A scala collection is just a collection of references, or pointers if you will.
If it happens to contain values of chisel types then those values will exist in the design,
but the collection itself will not, so we cannot index it from hardware.
This can be seen in `myIncrementN`, where an array of incrementors is used: the array only
helps the scala program wire the components together, and once that is done it is not
used again.
We could do the same with MyVector, but it's not pretty:
#+begin_src scala
class MyVector2() extends Module {
  val io = IO(
    new Bundle {
      val idx = Input(UInt(32.W))
      val out = Output(UInt(32.W))
    }
  )
  val values = Array(0.U, 1.U, 2.U, 3.U)
  io.out := values(0)
  for(ii <- 0 until 4){
    when(io.idx === ii.U){
      io.out := values(ii)
    }
  }
}
#+end_src
Note that it is necessary to specify a default for io.out, even though it will only be
selected when io.idx is out of range.
| While it looks ugly, the generated hardware should, at least in theory, not take up any | |||||
| more space or run any slower than the Vec based implementation, save for one difference | |||||
| as we will see in the next section. | |||||
| * Bit Widths | |||||
What happens if we attempt to index the 6th element in our 4 element vector?
In MyVector we get 1, and in MyVector2 we get 0, so they're not exactly the same.
In MyVector the Vec has 4 elements, so only two wires are needed to select one
(00, 01, 10, 11); the remaining 30 wires of io.idx are simply not used.
| In MyVector2 on the other hand we have specified a default value for io.out, so for any | |||||
| index higher than 3 the output will be 0. | |||||
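If you want to convince yourself, a sketch of a tester (hypothetical class, reusing
MyVector and relying on the index truncation just described) might look like this:
#+begin_src scala
import chisel3.iotesters.PeekPokeTester

// Poke an out of range index and observe the truncation described above.
class OutOfRangeTester(c: MyVector) extends PeekPokeTester(c) {
  poke(c.io.idx, 5)   // 5 is 0b101; only the low two bits reach the Vec
  expect(c.io.out, 1) // 0b01 selects element 1, which holds the value 1
}
#+end_src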
| What about the values in the Vec? | |||||
| 0.U can be represented by a single wire, whereas 3.U must be represented by at | |||||
| least two wires. | |||||
| In this case it is easy for chisel to see that they must both be of width 32 since they will | |||||
| be driving the output signal which is specified as 32 bit wide. | |||||
In theory, specifying widths should only be necessary at the very endpoints of your
design; in practice, inferring everything from those endpoints quickly becomes
intractable, so we specify widths at module boundaries.
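The following sketch (hypothetical module, not part of the exercise) shows how widths
behave in arithmetic: a plain + keeps the operand width and silently drops the carry,
while the expanding +& operator grows the result by one bit.
#+begin_src scala
import chisel3._

// Width handling: + keeps the operand width (the carry is lost), while +&
// grows the result by one bit so the carry survives.
class WidthDemo extends Module {
  val io = IO(new Bundle {
    val a    = Input(UInt(4.W))
    val b    = Input(UInt(4.W))
    val slim = Output(UInt(4.W))
    val wide = Output(UInt(5.W))
  })
  io.slim := io.a + io.b   // 4 bit result, wraps around on overflow
  io.wide := io.a +& io.b  // 5 bit result, the carry bit is kept
}
#+end_src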
| * Stateful circuits | |||||
| #+begin_src scala | |||||
| class SimpleDelay() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val delayReg = RegInit(UInt(32.W), 0.U) | |||||
| delayReg := io.dataIn | |||||
| io.dataOut := delayReg | |||||
| } | |||||
#+end_src
This circuit seems rather pointless, it simply assigns the input to the output.
However, the register has another input, as seen in the RTL: fig [rm6].
The register can only change its value on a rising clock edge, so unlike the previous
circuits, SimpleDelay stores its value in a register, causing a one cycle delay
between input and output.
To exemplify, assume that at step 0 dataIn is 0x45. delayReg will then have 0x45 at
its input, but dataOut will still be 0. Only when the clock ticks will delayReg, and
therefore dataOut, take on the value 0x45.
Let's write a test for this:
#+begin_src scala
class DelaySpec extends FlatSpec with Matchers {
  behavior of "SimpleDelay"
  it should "Delay input by one timestep" in {
    chisel3.iotesters.Driver(() => new SimpleDelay) { c =>
      new DelayTester(c)
    } should be(true)
  }
}

class DelayTester(c: SimpleDelay) extends PeekPokeTester(c) {
  for(ii <- 0 until 10){
    val input = scala.util.Random.nextInt(10)
    poke(c.io.dataIn, input)
    expect(c.io.dataOut, input)
  }
}
#+end_src
Let's run it:
| #+begin_src | |||||
| sbt:chisel-module-template> testOnly Ex0.DelaySpec | |||||
| ... | |||||
| [info] [0.001] Elaborating design... | |||||
| [info] [0.071] Done elaborating. | |||||
| Total FIRRTL Compile Time: 144.7 ms | |||||
| Total FIRRTL Compile Time: 9.4 ms | |||||
| End of dependency graph | |||||
| Circuit state created | |||||
| [info] [0.001] SEED 1556196281084 | |||||
| [info] [0.002] EXPECT AT 0 io_dataOut got 0 expected 7 FAIL | |||||
| [info] [0.002] EXPECT AT 0 io_dataOut got 0 expected 6 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 1 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 2 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 7 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 4 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 8 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 8 FAIL | |||||
| [info] [0.003] EXPECT AT 0 io_dataOut got 0 expected 7 FAIL | |||||
| #+end_src | |||||
Oops, the tester doesn't advance the clock before checking the output. I totally
didn't make that error on purpose to highlight it...
| #+begin_src scala | |||||
| class DelayTester(c: SimpleDelay) extends PeekPokeTester(c) { | |||||
| for(ii <- 0 until 10){ | |||||
| val input = scala.util.Random.nextInt(10) | |||||
| poke(c.io.dataIn, input) | |||||
| step(1) | |||||
| expect(c.io.dataOut, input) | |||||
| } | |||||
| } | |||||
| #+end_src | |||||
Much better.
You should now be able to implement myDelayN following the same principles as
myIncrementN:
#+begin_src scala
class myDelayN(delay: Int) extends Module {
  val io = IO(
    new Bundle {
      val dataIn = Input(UInt(32.W))
      val dataOut = Output(UInt(32.W))
    }
  )
  ???
}
#+end_src
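Once your myDelayN works, a test along these lines might be useful. This is only a
sketch (hypothetical tester, assuming the io above and a delay of exactly `delay`
cycles), not a required part of the hand-in:
#+begin_src scala
import chisel3.iotesters.PeekPokeTester

// Holds each input stable for `delay` cycles and checks that it has reached the
// output after exactly that many rising edges.
class DelayNTester(c: myDelayN, delay: Int) extends PeekPokeTester(c) {
  for (ii <- 0 until 10) {
    val input = scala.util.Random.nextInt(10)
    poke(c.io.dataIn, input)
    step(delay)
    expect(c.io.dataOut, input)
  }
}
#+end_src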
This should answer the initial question of combinatorial vs stateful:
@@ -274,3 +730,4 @@
matrix multiplier.
Why did this happen, and how could this have been avoided?
| @@ -1,56 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| object Extras { | |||||
| def somefun(someval: Int) : Unit = {} | |||||
| val vecA = List(1, 2, 4) | |||||
| val vecB = List(2, -3, 1) | |||||
| def dotProductForLoop(vecA: List[Int], vecB: List[Int]) = { | |||||
| var dotProduct = 0 | |||||
| for(i <- 0 until vecA.length){ | |||||
| dotProduct = dotProduct + (vecA(i) * vecB(i)) | |||||
| } | |||||
| dotProduct | |||||
| } | |||||
| // If you prefer a functional style scala has excellent support. | |||||
| val dotProductFP = (vecA zip vecB) | |||||
| .map{ case(a, b) => a*b } | |||||
| .sum | |||||
| val fancyDotProduct = (vecA zip vecB) | |||||
| .foldLeft(0){ case(acc, ab) => acc + (ab._1 * ab._2) } | |||||
| // Scala gives you ample opportunity to write unreadable code. | |||||
| // This is not good code!!! | |||||
| val tooFancyDotProduct = | |||||
| (0 /: (vecA zip vecB)){ case(acc, ab) => acc + (ab._1 * ab._2) } | |||||
| type Matrix[A] = List[List[A]] | |||||
| def vectorMatrixMultiply(vec: List[Int], matrix: Matrix[Int]): List[Int] = { | |||||
| val transposed = matrix.transpose | |||||
| val outputVector = Array.ofDim[Int](vec.length) | |||||
| for(ii <- 0 until matrix.length){ | |||||
| outputVector(ii) = dotProductForLoop(vec, transposed(ii)) | |||||
| } | |||||
| outputVector.toList | |||||
| } | |||||
| val vec = List(1, 0, 1) | |||||
| val matrix = List( | |||||
| List(2, 1, 2), | |||||
| List(3, 2, 3), | |||||
| List(4, 1, 1) | |||||
| ) | |||||
| println(vectorMatrixMultiply(vec, matrix)) | |||||
| } | |||||
| @@ -1,161 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| class myIncrement(incrementBy: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| io.dataOut := io.dataIn + incrementBy.U | |||||
| } | |||||
| class myIncrementTwice(incrementBy: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val first = Module(new myIncrement(incrementBy)) | |||||
| val second = Module(new myIncrement(incrementBy)) | |||||
| first.io.dataIn := io.dataIn | |||||
| second.io.dataIn := first.io.dataOut | |||||
| io.dataOut := second.io.dataOut | |||||
| } | |||||
| class myIncrementN(incrementBy: Int, numIncrementors: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val incrementors = Array.fill(numIncrementors){ Module(new myIncrement(incrementBy)) } | |||||
| for(ii <- 1 until numIncrementors){ | |||||
| incrementors(ii).io.dataIn := incrementors(ii - 1).io.dataOut | |||||
| } | |||||
| incrementors(0).io.dataIn := io.dataIn | |||||
io.dataOut := incrementors(numIncrementors - 1).io.dataOut
| } | |||||
| class myDelay() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val reg = RegInit(UInt(32.W), 0.U) | |||||
| reg := io.dataIn | |||||
| io.dataOut := reg | |||||
| } | |||||
| class myDelayN(steps: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val delayers = Array.fill(steps){ Module(new myDelay()) } | |||||
| for(ii <- 1 until steps){ | |||||
| delayers(ii).io.dataIn := delayers(ii - 1).io.dataOut | |||||
| } | |||||
| delayers(0).io.dataIn := io.dataIn | |||||
io.dataOut := delayers(steps - 1).io.dataOut
| } | |||||
| class mySelector(numValues: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val next = Input(Bool()) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| val newOutput = Output(Bool()) | |||||
| } | |||||
| ) | |||||
| val counter = RegInit(UInt(Chisel.log2Up(numValues).W), 0.U) | |||||
| val nextOutputIsFresh = RegInit(Bool(), true.B) | |||||
| /** | |||||
| Generate random values. Using the when keyword we choose which random | |||||
| value should drive the dataOut signal | |||||
| */ | |||||
| io.dataOut := 0.U | |||||
| List.fill(numValues)(scala.util.Random.nextInt(100)).zipWithIndex.foreach { | |||||
| case(rand, idx) => | |||||
| when(counter === idx.U){ | |||||
| if(rand < 50) | |||||
| io.dataOut := rand.U | |||||
| else | |||||
| io.dataOut := (rand + 100).U | |||||
| } | |||||
| } | |||||
| /** | |||||
While chisel comes with an inbuilt Counter, we implement ours the old-fashioned way.
| There are far more elegant ways of implementing this, read the chisel docs, discuss | |||||
| best practice among yourselves and experiment! | |||||
| */ | |||||
| nextOutputIsFresh := true.B | |||||
| when(io.next === true.B){ | |||||
| when(counter < (numValues - 1).U){ | |||||
| counter := counter + 1.U | |||||
| }.otherwise { | |||||
| counter := 0.U | |||||
| } | |||||
| }.otherwise { | |||||
| nextOutputIsFresh := false.B | |||||
| } | |||||
| io.newOutput := nextOutputIsFresh | |||||
| } | |||||
| class mySelectorTest(c: mySelector) extends PeekPokeTester(c) { | |||||
| poke(c.io.next, true.B) | |||||
| for(ii <- 0 until 10){ | |||||
| val wasStale = peek(c.io.newOutput) == 0 | |||||
| val output = peek(c.io.dataOut).toString() | |||||
| println(s"at step $ii:") | |||||
| println(s"data out is $output") | |||||
| println(s"was the output fresh? ${!wasStale}") | |||||
| println() | |||||
| step(1) | |||||
| } | |||||
| poke(c.io.next, false.B) | |||||
| for(ii <- 0 until 3){ | |||||
| val wasStale = peek(c.io.newOutput) == 0 | |||||
| val output = peek(c.io.dataOut).toString() | |||||
| println(s"at step $ii:") | |||||
| println(s"data out is $output") | |||||
| println(s"was the output fresh? ${!wasStale}") | |||||
| println() | |||||
| step(1) | |||||
| } | |||||
| } | |||||
| @@ -1,47 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.util.Counter | |||||
| /** | |||||
| DaisyVectors are not indexed. They have no control inputs or outputs, only data. | |||||
| */ | |||||
| class daisyDot(elements: Int, dataWidth: Int) extends Module{ | |||||
| val io = IO(new Bundle { | |||||
| val dataInA = Input(UInt(dataWidth.W)) | |||||
| val dataInB = Input(UInt(dataWidth.W)) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| val outputValid = Output(Bool()) | |||||
| }) | |||||
| /** | |||||
| Keep track of how many elements have been accumulated. As the interface has no | |||||
| indicator that data can be invalid it should always be assumed that data IS valid. | |||||
| This in turn means that the counter should tick on every cycle | |||||
| */ | |||||
| val counter = Counter(elements) | |||||
| val accumulator = RegInit(UInt(dataWidth.W), 0.U) | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| // Increment the value of the accumulator with the product of data in A and B | |||||
| // When the counter reaches elements set output valid to true and flush the accumulator | |||||
| /** | |||||
| LF | |||||
| */ | |||||
| val product = io.dataInA * io.dataInB | |||||
| when(counter.inc()){ | |||||
| io.outputValid := true.B | |||||
| accumulator := 0.U | |||||
| }.otherwise{ | |||||
| io.outputValid := false.B | |||||
| accumulator := accumulator + product | |||||
| } | |||||
| io.dataOut := accumulator + product | |||||
| } | |||||
| @@ -1,44 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import utilz._ | |||||
| /** | |||||
| DaisyGrids hold n daisyVecs. Unlike the daisyVecs, daisyGrids have a select signal for selecting | |||||
| which daisyVec to work on, but these daisyVecs can not be controlled from the outside. | |||||
| */ | |||||
| class daisyGrid(dims: Dims, dataWidth: Int) extends Module{ | |||||
| val io = IO(new Bundle { | |||||
| val writeEnable = Input(Bool()) | |||||
| val dataIn = Input(UInt(dataWidth.W)) | |||||
| val rowSelect = Input(UInt(8.W)) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| }) | |||||
| val rows = Array.fill(dims.rows){ Module(new daisyVector(dims.cols, dataWidth)).io } | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| /** | |||||
| LF | |||||
| */ | |||||
| io.dataOut := 0.U | |||||
| for(ii <- 0 until dims.rows){ | |||||
| rows(ii).writeEnable := 0.U | |||||
| rows(ii).dataIn := io.dataIn | |||||
| when(io.rowSelect === ii.U ){ | |||||
| rows(ii).writeEnable := io.writeEnable | |||||
| io.dataOut := rows(ii).dataOut | |||||
| } | |||||
| } | |||||
| } | |||||
| @@ -1,127 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import utilz._ | |||||
| /** | |||||
| The daisy multiplier creates two daisy grids, one transposed, and multiplies them. | |||||
| */ | |||||
| class daisyMultiplier(dims: Dims, dataWidth: Int) extends Module { | |||||
| val io = IO(new Bundle { | |||||
| val dataInA = Input(UInt(dataWidth.W)) | |||||
| val writeEnableA = Input(Bool()) | |||||
| val dataInB = Input(UInt(dataWidth.W)) | |||||
| val writeEnableB = Input(Bool()) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| val dataValid = Output(Bool()) | |||||
| val done = Output(Bool()) | |||||
| }) | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| val rowCounter = RegInit(UInt(8.W), 0.U) | |||||
| val colCounter = RegInit(UInt(8.W), 0.U) | |||||
| val rowOutputCounter = RegInit(UInt(8.W), 0.U) | |||||
| val calculating = RegInit(Bool(), false.B) | |||||
| val accumulator = RegInit(UInt(8.W), 0.U) | |||||
| val resultReady = RegInit(Bool(), false.B) | |||||
| /** | |||||
Following the same principle behind the vector matrix multiplication, by
| NOT transposing the dimensions. | |||||
| When writing a multiplier for a 3x2 matrix it's implicit that this means a | |||||
| 3x2 matrix and 2x3, returning a 2x2 matrix. By not transposing the dimensions | |||||
| we get the same effect as in VecMat | |||||
| */ | |||||
| val matrixA = Module(new daisyGrid(dims, dataWidth)).io | |||||
| val matrixB = Module(new daisyGrid(dims, dataWidth)).io | |||||
| matrixA.dataIn := io.dataInA | |||||
| matrixA.writeEnable := io.writeEnableA | |||||
| matrixB.dataIn := io.dataInB | |||||
| matrixB.writeEnable := io.writeEnableB | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// Set up counter statemachine | |||||
| io.done := false.B | |||||
| when(colCounter === (dims.cols - 1).U){ | |||||
| colCounter := 0.U | |||||
| when(rowCounter === (dims.rows - 1).U){ | |||||
| rowCounter := 0.U | |||||
| calculating := true.B | |||||
| when(calculating === true.B){ | |||||
| when(rowOutputCounter === (dims.rows - 1).U){ | |||||
| io.done := true.B | |||||
| }.otherwise{ | |||||
| rowOutputCounter := rowOutputCounter + 1.U | |||||
| } | |||||
| } | |||||
| }.otherwise{ | |||||
| rowCounter := rowCounter + 1.U | |||||
| } | |||||
| }.otherwise{ | |||||
| colCounter := colCounter + 1.U | |||||
| } | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// set up reading patterns depending on if we are in calculating state or not | |||||
| when(calculating === true.B){ | |||||
| matrixA.rowSelect := rowOutputCounter | |||||
| }.otherwise{ | |||||
| matrixA.rowSelect := rowCounter | |||||
| } | |||||
| matrixB.rowSelect := rowCounter | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// when we're in calculating mode, check if we have valid output | |||||
| resultReady := false.B | |||||
| io.dataValid := false.B | |||||
| when(calculating === true.B){ | |||||
| when(colCounter === (dims.cols - 1).U){ | |||||
| resultReady := true.B | |||||
| } | |||||
| } | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// when we've got a result ready we need to flush the accumulator | |||||
| when(resultReady === true.B){ | |||||
| // To flush our accumulator we simply disregard previous state | |||||
| accumulator := (matrixA.dataOut*matrixB.dataOut) | |||||
| io.dataValid := true.B | |||||
| }.otherwise{ | |||||
| accumulator := accumulator + (matrixA.dataOut*matrixB.dataOut) | |||||
| } | |||||
| io.dataOut := accumulator | |||||
| } | |||||
| @@ -1,55 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| /** | |||||
| DaisyVectors are not indexed externally. They have no control inputs or outputs, only data. | |||||
| */ | |||||
| class daisyVector(elements: Int, dataWidth: Int) extends Module{ | |||||
| val io = IO(new Bundle { | |||||
| val writeEnable = Input(Bool()) | |||||
| val dataIn = Input(UInt(dataWidth.W)) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| }) | |||||
| /** | |||||
although the vector is not accessible by index externally, an internal index is necessary.
| It is initialized to the value 0 | |||||
| */ | |||||
| val currentIndex = RegInit(UInt(8.W), 0.U) | |||||
| val memory = Array.fill(elements)(RegInit(UInt(dataWidth.W), 0.U)) | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| // Cycle the currentIndex register, it should be equal to the current (cycle % elements) | |||||
| // Connect the selected output to io.dataOut | |||||
| // Connect writeEnable to the selected memory (selectable with memory(currentIndex)) | |||||
| /** | |||||
| LF | |||||
| */ | |||||
| when(currentIndex === (elements - 1).U ){ | |||||
| currentIndex := 0.U | |||||
| }.otherwise{ | |||||
| currentIndex := currentIndex + 1.U | |||||
| } | |||||
| io.dataOut := 0.U | |||||
| for(ii <- 0 until elements){ | |||||
| when(currentIndex === ii.U){ | |||||
| when(io.writeEnable === true.B){ | |||||
| memory(ii) := io.dataIn | |||||
| } | |||||
| io.dataOut := memory(ii) | |||||
| } | |||||
| } | |||||
| } | |||||
| @@ -1,150 +0,0 @@ | |||||
| package Core | |||||
| import Core.daisyVector | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import chisel3.util.Counter | |||||
| import utilz._ | |||||
| /** | |||||
| The daisy multiplier creates two daisy grids, one transposed, and multiplies them. | |||||
| */ | |||||
| class daisyVecMat(matrixDims: Dims, dataWidth: Int) extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataInA = Input(UInt(dataWidth.W)) | |||||
| val writeEnableA = Input(Bool()) | |||||
| val dataInB = Input(UInt(dataWidth.W)) | |||||
| val writeEnableB = Input(Bool()) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| val dataValid = Output(Bool()) | |||||
| val done = Output(Bool()) | |||||
| } | |||||
| ) | |||||
| /** | |||||
| The dimensions are transposed because this is a vector * matrix multiplication | |||||
| [1, 2] | |||||
| [a, b, c] x [3, 4] | |||||
| [5, 6] | |||||
| Here the vector will output a, b, c, a, b, c, a... | |||||
| The Matrix is the type you made last exercise, so it is actually just 3 more vectors | |||||
| of length 2. In cycle 0 the values {1, 3, 5} may be selected, in cycle 1 {2, 4, 6} | |||||
| can be selected. | |||||
| However, you can make up for the impedance mismatch by transposing the matrix, storing | |||||
| the data in 2 vectors of length 3 instead. | |||||
| In memory matrixB will look like [1, 3, 5] | |||||
| [2, 4, 6] | |||||
| For a correct result, it is up to the user to input the data for matrixB in a transposed | |||||
| manner. This is done in the tests, you don't need to worry about it. | |||||
| */ | |||||
| val dims = matrixDims.transposed | |||||
| // basic linAlg | |||||
| val lengthA = dims.cols | |||||
| val vecA = Module(new daisyVector(lengthA, dataWidth)).io | |||||
| val matrixB = Module(new daisyGrid(dims, dataWidth)).io | |||||
| val dotProductCalculator = Module(new daisyDot(lengthA, dataWidth)).io | |||||
| val dataIsLoaded = RegInit(Bool(), false.B) | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| // Create counters to keep track of when the matrix and vector has gotten all the data. | |||||
| // You can assume that writeEnable will be synchronized with the vectors. I.e for a vector | |||||
| // of length 3 writeEnable can only go from true to false and vice versa at T = 0, 3, 6, 9 etc | |||||
| // Create counters to keep track of how far along the computation is. | |||||
| // Set up the correct rowSelect for matrixB | |||||
| // Wire up write enables for matrixB and vecA | |||||
| /** | |||||
| In the solution I used the following to keep track of state | |||||
| You can use these if you want to, or do it however you see fit. | |||||
| */ | |||||
| // val currentCol = Counter(dims.cols) | |||||
| // val rowSel = Counter(dims.rows) | |||||
| // val aReady = RegInit(Bool(), false.B) | |||||
| // val bReady = RegInit(Bool(), false.B) | |||||
| // val isDone = RegInit(Bool(), false.B) | |||||
| // val (inputCounterB, counterBWrapped) = Counter(io.writeEnableB, (dims.elements) - 1) | |||||
| // val (numOutputted, numOutputtedWrapped) = Counter(dataValid, lengthA) | |||||
| // val (inputCounterA, counterAWrapped) = Counter(io.writeEnableA, lengthA - 1) | |||||
| /** | |||||
| LF | |||||
| */ | |||||
| val dataValid = Wire(Bool()) | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// Wire components | |||||
| vecA.dataIn := io.dataInA | |||||
| vecA.writeEnable := io.writeEnableA | |||||
| matrixB.dataIn := io.dataInB | |||||
| matrixB.writeEnable := io.writeEnableB | |||||
| io.dataOut := dotProductCalculator.dataOut | |||||
| // allows us to use dataValid internally | |||||
| io.dataValid := dataValid | |||||
| dotProductCalculator.dataInA := vecA.dataOut | |||||
| dotProductCalculator.dataInB := matrixB.dataOut | |||||
| dataValid := dotProductCalculator.outputValid & dataIsLoaded | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// Select the correct row | |||||
| val currentCol = Counter(dims.cols) | |||||
| val rowSel = Counter(dims.rows) | |||||
| when(currentCol.inc()){ | |||||
| rowSel.inc() | |||||
| } | |||||
| matrixB.rowSelect := rowSel.value | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// Check if data is loaded | |||||
| val aReady = RegInit(Bool(), false.B) | |||||
| val bReady = RegInit(Bool(), false.B) | |||||
| val (inputCounterA, counterAWrapped) = Counter(io.writeEnableA, lengthA - 1) | |||||
| when(counterAWrapped){ aReady := true.B } | |||||
| val (inputCounterB, counterBWrapped) = Counter(io.writeEnableB, (dims.elements) - 1) | |||||
| when(counterBWrapped){ bReady := true.B } | |||||
| dataIsLoaded := aReady & bReady | |||||
| //////////////////////////////////////// | |||||
| //////////////////////////////////////// | |||||
| /// Check if we're done | |||||
| val isDone = RegInit(Bool(), false.B) | |||||
| val (numOutputted, numOutputtedWrapped) = Counter(dataValid, lengthA) | |||||
| when(numOutputtedWrapped){ isDone := true.B } | |||||
| io.done := isDone | |||||
| } | |||||
| @@ -1,40 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import chisel3.util.Counter | |||||
| /** | |||||
| DaisyVectors are not indexed. They have no control inputs or outputs, only data. | |||||
| */ | |||||
| class daisyVecVec(elements: Int, dataWidth: Int) extends Module{ | |||||
| val io = IO(new Bundle { | |||||
| val dataInA = Input(UInt(dataWidth.W)) | |||||
| val dataInB = Input(UInt(dataWidth.W)) | |||||
| val dataOut = Output(UInt(dataWidth.W)) | |||||
| val outputValid = Output(Bool()) | |||||
| }) | |||||
| val counter = Counter(elements) | |||||
| val accumulator = RegInit(UInt(dataWidth.W), 0.U) | |||||
| /** | |||||
| Your implementation here | |||||
| */ | |||||
| /** | |||||
| LF | |||||
| */ | |||||
| val product = io.dataInA * io.dataInB | |||||
| when(counter.inc()){ | |||||
| io.outputValid := true.B | |||||
| accumulator := 0.U | |||||
| }.otherwise{ | |||||
| io.outputValid := false.B | |||||
| accumulator := accumulator + product | |||||
| } | |||||
| io.dataOut := accumulator + product | |||||
| } | |||||
| @@ -1,45 +0,0 @@ | |||||
| package Core | |||||
| object utilz { | |||||
| type Matrix = List[List[Int]] | |||||
| def genMatrix(dims: Dims): Matrix = | |||||
| List.fill(dims.rows)( | |||||
| List.fill(dims.cols)(scala.util.Random.nextInt(5)) | |||||
| ) | |||||
| case class Dims(rows: Int, cols: Int){ | |||||
| val elements = rows*cols | |||||
| def transposed = Dims(cols, rows) | |||||
| } | |||||
| def printVector(v: List[Int]): String = | |||||
| v.mkString("[","\t","]") | |||||
| def printMatrix(m: List[List[Int]]): String = | |||||
| m.map(printVector).mkString("\n") | |||||
| /** | |||||
| Typically I'd fix the signature to Map[A,B] | |||||
| Prints all the IOs of a Module | |||||
| ex: | |||||
| ``` | |||||
| CycleTask[daisyVecMat]( | |||||
| 10, | |||||
| _ => println(s"at step $n"), | |||||
| d => println(printModuleIO(d.peek(d.dut.io))), | |||||
| ) | |||||
| ``` | |||||
| */ | |||||
| def printModuleIO[A,B](m: scala.collection.mutable.LinkedHashMap[A,B]): String = | |||||
| m.toList.map{ case(x,y) => "" + x.toString() + " -> " + y.toString() }.reverse.mkString("\n") | |||||
| def dotProduct(xs: List[Int], ys: List[Int]): Int = | |||||
| (for ((x, y) <- xs zip ys) yield x * y).sum | |||||
| def matrixMultiply(ma: Matrix, mb: Matrix): Matrix = | |||||
| ma.map(mav => mb.transpose.map(mbv => dotProduct(mav,mbv))) | |||||
| } | |||||
| @@ -1,82 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import testUtils._ | |||||
| class daisyDotSpec extends FlatSpec with Matchers { | |||||
| behavior of "daisy vector" | |||||
| it should "Only signal valid output at end of calculation" in { | |||||
| val ins = (0 to 20).map(ii => | |||||
| CycleTask[daisyDot]( | |||||
| ii, | |||||
| d => d.poke(d.dut.io.dataInA, 0), | |||||
| d => d.poke(d.dut.io.dataInB, 0), | |||||
| d => d.expect(d.dut.io.outputValid, if((ii % 3) == 2) 1 else 0), | |||||
| ) | |||||
| ) | |||||
| iotesters.Driver.execute(() => new daisyDot(3, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyDot](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "Be able to count to 3" in { | |||||
| val ins = (0 to 20).map(ii => | |||||
| CycleTask[daisyDot]( | |||||
| ii, | |||||
| d => d.poke(d.dut.io.dataInA, 1), | |||||
| d => d.poke(d.dut.io.dataInB, 1), | |||||
| d => d.expect(d.dut.io.outputValid, if((ii % 3) == 2) 1 else 0), | |||||
| d => if(d.peek(d.dut.io.outputValid) == 1) | |||||
| d.expect(d.dut.io.dataOut, 3) | |||||
| ) | |||||
| ) | |||||
| iotesters.Driver.execute(() => new daisyDot(3, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyDot](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "Be able to calculate dot products" in { | |||||
| def createProblem(vecLen: Int): List[CycleTask[daisyDot]] = { | |||||
| val in1 = List.fill(vecLen)(scala.util.Random.nextInt(10)) | |||||
| val in2 = List.fill(vecLen)(scala.util.Random.nextInt(10)) | |||||
| val dotProduct = (in1, in2).zipped.map(_*_).sum | |||||
| (in1, in2, (0 to vecLen)).zipped.map{ | |||||
| case(a, b, idx) => | |||||
| CycleTask[daisyDot]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataInA, a), | |||||
| d => d.poke(d.dut.io.dataInB, b), | |||||
| d => if(d.peek(d.dut.io.outputValid) == 1) | |||||
| d.expect(d.dut.io.dataOut, dotProduct) | |||||
| ) | |||||
| } | |||||
| } | |||||
| def createProblems(vecLen: Int): List[CycleTask[daisyDot]] = | |||||
| List.fill(10)(createProblem(vecLen)).zipWithIndex.map{ case(probs, idx) => | |||||
| probs.map(_.delay(3*idx)) | |||||
| }.flatten | |||||
| iotesters.Driver.execute(() => new daisyDot(3, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyDot](createProblems(3), c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,82 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import testUtils._ | |||||
| import utilz._ | |||||
| class daisyGridSpec extends FlatSpec with Matchers { | |||||
| behavior of "daisy grid" | |||||
| def writeRowCheck(dims: Dims, rowSel: Int => Int): Seq[CycleTask[daisyGrid]] = { | |||||
| (0 until dims.cols).map( n => | |||||
| CycleTask[daisyGrid]( | |||||
| n, | |||||
| d => d.poke(d.dut.io.dataIn, n), | |||||
| d => d.poke(d.dut.io.writeEnable, 1), | |||||
| d => d.poke(d.dut.io.rowSelect, rowSel(n))) | |||||
| ) ++ | |||||
| (0 until dims.cols*2).map( n => | |||||
| CycleTask[daisyGrid]( | |||||
| n, | |||||
| d => d.poke(d.dut.io.dataIn, 0), | |||||
| d => d.poke(d.dut.io.writeEnable, 0), | |||||
| d => d.poke(d.dut.io.rowSelect, rowSel(n)), | |||||
| d => d.expect(d.dut.io.dataOut, n % dims.cols)).delay(dims.cols) | |||||
| ) | |||||
| } | |||||
| val dims = Dims(rows = 4, cols = 5) | |||||
| it should "work like a regular daisyVec when row select is fixed to 0" in { | |||||
| iotesters.Driver.execute(() => new daisyGrid(dims, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyGrid](writeRowCheck(dims, _ => 0), c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "work like a regular daisyVec when row select is fixed to 1" in { | |||||
| iotesters.Driver.execute(() => new daisyGrid(dims, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyGrid](writeRowCheck(dims, _ => 1), c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "be able to write a matrix and output it" in { | |||||
| iotesters.Driver.execute(() => new daisyGrid(dims, 32), new TesterOptionsManager) { c => | |||||
| def writeMatrix(matrix: Matrix): List[CycleTask[daisyGrid]] = { | |||||
| (0 until dims.elements).toList.zipWithIndex.map{ case(n, idx) => | |||||
| val row = n / dims.cols | |||||
| CycleTask[daisyGrid]( | |||||
| n, | |||||
| d => d.poke(d.dut.io.dataIn, n), | |||||
| d => d.poke(d.dut.io.writeEnable, 1), | |||||
| d => d.poke(d.dut.io.rowSelect, row)) | |||||
| } | |||||
| } | |||||
| def readMatrix(matrix: Matrix): List[CycleTask[daisyGrid]] = { | |||||
| (0 until dims.elements).toList.zipWithIndex.map{ case(n, idx) => | |||||
| val row = n / dims.cols | |||||
| CycleTask[daisyGrid]( | |||||
| n, | |||||
| d => d.poke(d.dut.io.dataIn, 0), | |||||
| d => d.poke(d.dut.io.writeEnable, 0), | |||||
| d => d.poke(d.dut.io.rowSelect, row), | |||||
| d => d.expect(d.dut.io.dataOut, n)) | |||||
| } | |||||
| } | |||||
| val m = genMatrix(Dims(rows = 4, cols = 5)) | |||||
| val ins = writeMatrix(m) ++ readMatrix(m).map(_.delay(dims.elements)) | |||||
| IoSpec[daisyGrid](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,112 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import testUtils._ | |||||
| import utilz._ | |||||
| class daisyMatMulSpec extends FlatSpec with Matchers { | |||||
| def generateProblem(dims: Dims): List[CycleTask[daisyMultiplier]] = { | |||||
| val matrixA = genMatrix(dims) | |||||
| val matrixB = genMatrix(dims).transpose | |||||
| val answers = matrixMultiply(matrixA, matrixB) | |||||
| println("Multiplying matrix A") | |||||
| println(printMatrix(matrixA)) | |||||
| println("with matrix B") | |||||
| println(printMatrix(matrixB)) | |||||
| println("The input order of matrix B is") | |||||
| println(printMatrix(matrixB.transpose)) | |||||
| println("Expected output is") | |||||
| println(printMatrix(answers)) | |||||
| val matrixInputA = matrixA.flatten.zipWithIndex.map{ | |||||
| case(in, idx) => | |||||
| CycleTask[daisyMultiplier]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataInA, in), | |||||
| d => d.poke(d.dut.io.writeEnableA, 1) | |||||
| ) | |||||
| } | |||||
| val matrixInputB = matrixB.transpose.flatten.zipWithIndex.map{ | |||||
| case(in, idx) => | |||||
| CycleTask[daisyMultiplier]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataInB, in), | |||||
| d => d.poke(d.dut.io.writeEnableB, 1) | |||||
| ) | |||||
| } | |||||
| val disableInputs = List( | |||||
| CycleTask[daisyMultiplier]( | |||||
| dims.elements, | |||||
| d => d.poke(d.dut.io.writeEnableA, 0) | |||||
| ), | |||||
| CycleTask[daisyMultiplier]( | |||||
| dims.elements, | |||||
| d => d.poke(d.dut.io.writeEnableB, 0) | |||||
| ) | |||||
| ) | |||||
| val checkValid1 = (0 until dims.elements).map( n => | |||||
| CycleTask[daisyMultiplier]( | |||||
| n, | |||||
| d => d.expect(d.dut.io.dataValid, 0, "data valid should not be asserted before data is ready") | |||||
| ) | |||||
| ).toList | |||||
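| // checkValid2 encodes the expected output schedule: each dot product takes dims.cols cycles, | |||||
| // so dataValid is only expected on the last cycle of every group of dims.cols, and the row/column | |||||
| // indices recover which element of the answer matrix should be on dataOut at that cycle. | |||||
| // The whole block is shifted past the load phase with .delay(dims.elements + 1). | |||||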
| val checkValid2 = (0 until dims.rows * dims.rows * dims.cols).map{ n => | |||||
| val shouldBeValid = (n % dims.cols) == dims.cols - 1 | |||||
| val answerRowIndex = n/(dims.rows*dims.cols) | |||||
| val answerColIndex = ((n-1)/(dims.cols)) % dims.rows | |||||
| val expectedOutput = answers(answerRowIndex)(answerColIndex) | |||||
| CycleTask[daisyMultiplier]( | |||||
| n, | |||||
| d => if(!shouldBeValid) | |||||
| d.expect(d.dut.io.dataValid, 0) | |||||
| else { | |||||
| d.expect(d.dut.io.dataValid, 1) | |||||
| d.expect(d.dut.io.dataOut, expectedOutput) | |||||
| } | |||||
| ).delay(dims.elements + 1) | |||||
| }.toList | |||||
| // adds a lot of annoying noise | |||||
| // val peekDebug = (0 until 20).map(n => | |||||
| // CycleTask[daisyMultiplier]( | |||||
| // n, | |||||
| // _ => println(s"at step $n"), | |||||
| // d => println(printModuleIO(d.peek(d.dut.io))), | |||||
| // _ => println(), | |||||
| // ) | |||||
| // ).toList | |||||
| matrixInputA ::: matrixInputB ::: disableInputs ::: checkValid1 ::: checkValid2 // ::: peekDebug | |||||
| } | |||||
| behavior of "mat multiplier" | |||||
| val dims = Dims(rows = 3, cols = 2) | |||||
| it should "work" in { | |||||
| iotesters.Driver.execute(() => new daisyMultiplier(dims, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyMultiplier](generateProblem(Dims(rows = 3, cols = 2)), c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,116 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import testUtils._ | |||||
| import utilz._ | |||||
| class daisyVecMatSpec extends FlatSpec with Matchers { | |||||
| def generateProblem(dims: Dims): List[CycleTask[daisyVecMat]] = { | |||||
| // for a vector of length A, the matrix must have A rows | |||||
| val matrixB = genMatrix(dims).transpose | |||||
| val vecA = List.fill(dims.rows)(scala.util.Random.nextInt(5)) | |||||
| def answers: List[Int] = matrixB.map( col => | |||||
| (col, vecA).zipped.map(_*_).sum) | |||||
| println("multiplying vector: ") | |||||
| println(printVector(vecA)) | |||||
| println("with matrix:") | |||||
| println(printMatrix(matrixB.transpose)) | |||||
| println("which should equal") | |||||
| println(printVector(answers)) | |||||
| println("Input order of matrix:") | |||||
| println(printMatrix(matrixB)) | |||||
| val vecInput = vecA.zipWithIndex.map{ | |||||
| case(in, idx) => | |||||
| CycleTask[daisyVecMat]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataInA, in), | |||||
| d => d.poke(d.dut.io.writeEnableA, 1) | |||||
| ) | |||||
| } | |||||
| val matrixInput = matrixB.flatten.zipWithIndex.map{ | |||||
| case(in, idx) => | |||||
| CycleTask[daisyVecMat]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataInB, in), | |||||
| d => d.poke(d.dut.io.writeEnableB, 1) | |||||
| ) | |||||
| } | |||||
| val inputDisablers = List( | |||||
| CycleTask[daisyVecMat]( | |||||
| dims.rows, | |||||
| d => d.poke(d.dut.io.writeEnableA, 0) | |||||
| ), | |||||
| CycleTask[daisyVecMat]( | |||||
| dims.elements, | |||||
| d => d.poke(d.dut.io.writeEnableB, 0) | |||||
| ) | |||||
| ) | |||||
| val checkValid1 = (0 until dims.elements).map( n => | |||||
| CycleTask[daisyVecMat]( | |||||
| n, | |||||
| d => d.expect(d.dut.io.dataValid, 0, "data valid should not be asserted before data is ready") | |||||
| ) | |||||
| ).toList | |||||
| val checkValid2 = (0 until dims.elements).map{ n => | |||||
| val shouldBeValid = (n % dims.rows) == dims.rows - 1 | |||||
| val whichOutput = answers( (n/dims.rows) ) | |||||
| CycleTask[daisyVecMat]( | |||||
| n, | |||||
| d => if(!shouldBeValid) | |||||
| d.expect(d.dut.io.dataValid, 0) | |||||
| else { | |||||
| d.expect(d.dut.io.dataValid, 1) | |||||
| d.expect(d.dut.io.dataOut, whichOutput) | |||||
| } | |||||
| ).delay(dims.elements) | |||||
| }.toList | |||||
| // adds a lot of annoying noise | |||||
| // val peekDebug = (0 until 20).map(n => | |||||
| // CycleTask[daisyVecMat]( | |||||
| // n, | |||||
| // _ => println(s"at step $n"), | |||||
| // d => println(printModuleIO(d.peek(d.dut.io))), | |||||
| // _ => println(), | |||||
| // ) | |||||
| // ).toList | |||||
| vecInput ::: matrixInput ::: inputDisablers ::: checkValid1 ::: checkValid2 // ::: peekDebug | |||||
| } | |||||
| behavior of "vec mat multiplier" | |||||
| val dims = Dims(rows = 3, cols = 2) | |||||
| it should "work" in { | |||||
| iotesters.Driver.execute(() => new daisyVecMat(dims, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyVecMat](generateProblem(Dims(rows = 3, cols = 2)), c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,84 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import testUtils._ | |||||
| class daisyVecSpec extends FlatSpec with Matchers { | |||||
| behavior of "daisy vector" | |||||
| it should "not write when write enable is low" in { | |||||
| val ins = (0 to 10).map(ii => | |||||
| CycleTask[daisyVector]( | |||||
| ii, | |||||
| d => d.poke(d.dut.io.dataIn, 0), | |||||
| d => d.poke(d.dut.io.writeEnable, 0), | |||||
| d => d.expect(d.dut.io.dataOut, 0)) | |||||
| ).toList | |||||
| iotesters.Driver.execute(() => new daisyVector(4, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyVector](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "write only when write enable is asserted" in { | |||||
| val ins = | |||||
| (0 until 4).map(ii => | |||||
| CycleTask[daisyVector]( | |||||
| ii, | |||||
| _ => println("inputting 2s'"), | |||||
| d => d.poke(d.dut.io.dataIn, 2), | |||||
| d => d.poke(d.dut.io.writeEnable, 1))) ++ | |||||
| (0 until 6).map(ii => | |||||
| CycleTask[daisyVector]( | |||||
| ii + 4, | |||||
| _ => println("Checking output is 2"), | |||||
| d => d.poke(d.dut.io.writeEnable, 0), | |||||
| d => d.expect(d.dut.io.dataOut, 2) | |||||
| )) | |||||
| iotesters.Driver.execute(() => new daisyVector(4, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyVector](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| it should "Work in general" in { | |||||
| val ins = { | |||||
| val inputs = List.fill(10)(scala.util.Random.nextInt(10000)) | |||||
| println(inputs) | |||||
| val in = inputs.zipWithIndex.map{ case(in,idx) => | |||||
| CycleTask[daisyVector]( | |||||
| idx, | |||||
| d => d.poke(d.dut.io.dataIn, in), | |||||
| d => d.poke(d.dut.io.writeEnable, 1) | |||||
| ) | |||||
| } | |||||
| val out = inputs.zipWithIndex.map{ case(expected, idx) => | |||||
| CycleTask[daisyVector]( | |||||
| idx + 4, | |||||
| d => d.expect(d.dut.io.dataOut, expected) | |||||
| ) | |||||
| } | |||||
| in ::: out | |||||
| } | |||||
| iotesters.Driver.execute(() => new daisyVector(4, 32), new TesterOptionsManager) { c => | |||||
| IoSpec[daisyVector](ins, c).myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,74 +0,0 @@ | |||||
| package Core | |||||
| import chisel3._ | |||||
| import chisel3.iotesters._ | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| object testUtils { | |||||
| /** | |||||
| Somewhat unintuitively named, a CycleTask is a list of test tasks to run at a given time step. | |||||
| To avoid having to supply an explicit list, the Scala varargs syntax (*) is used. | |||||
| As an example, say that at step 13 we want to drive a signal with in: (PeekPokeTester[T] => Unit) | |||||
| and check an output with out: (PeekPokeTester[T] => Unit), which may throw a test failure exception. | |||||
| Thanks to the varargs syntax this is simply | |||||
| CycleTask[MyModule](13, in, out) | |||||
| Sometimes it is convenient to delay a group of checks by a fixed number of cycles. | |||||
| For instance, if a component needs 10 cycles to set up but it is more convenient to write the | |||||
| tests from T = 0, we write them that way and then call .delay(10) so that T0 for those tasks | |||||
| is actually T = 10. | |||||
| */ | |||||
| case class CycleTask[T <: Module](step: Int, run: PeekPokeTester[T] => Unit*){ | |||||
| // :_* is needed to pass an explicit sequence to a varargs parameter | |||||
| def delay(by: Int) = CycleTask[T](step + by, run:_*) | |||||
| } | |||||
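| // A minimal sketch of how these pieces compose (MyModule and its io fields are hypothetical, | |||||
| // as in the comment above): poke an input at step 0 and schedule a check that was written | |||||
| // relative to T = 0 but shifted four cycles with .delay(4). The resulting sequence is what | |||||
| // IoSpec below consumes: | |||||
| // | |||||
| // val example: Seq[CycleTask[MyModule]] = Seq( | |||||
| //   CycleTask[MyModule](0, d => d.poke(d.dut.io.dataIn, 7)), | |||||
| //   CycleTask[MyModule](0, d => d.expect(d.dut.io.dataOut, 7)).delay(4)) | |||||
| // IoSpec[MyModule](example, dut).myTester | |||||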
| /** | |||||
| Takes a list of cycle tasks, groups them by time step, and steps the tester until every cycle task has run | |||||
| */ | |||||
| case class IoSpec[T <: Module]( | |||||
| instructions: Seq[CycleTask[T]], | |||||
| component: T | |||||
| ){ | |||||
| val lastStep = instructions.maxBy(_.step).step | |||||
| val instructionsMap = instructions.groupBy(_.step) | |||||
| class tester(c: T) extends PeekPokeTester(c) | |||||
| val myTester: PeekPokeTester[T] = new tester(component) { | |||||
| for(ii <- 0 to lastStep){ | |||||
| instructionsMap.getOrElse(ii, Nil).foreach(_.run.foreach(t => t(this))) | |||||
| step(1) | |||||
| } | |||||
| } | |||||
| } | |||||
| } | |||||
| class testUtilSpec extends FlatSpec with Matchers { | |||||
| import testUtils._ | |||||
| val ins = List[CycleTask[daisyVector]]( | |||||
| CycleTask( | |||||
| 1, | |||||
| d => d.poke(d.dut.io.dataIn, 1), | |||||
| d => d.expect(d.dut.io.dataOut, 0, s"fail at step ${d.t}") | |||||
| ) | |||||
| ) | |||||
| behavior of "my simple test harness attempt" | |||||
| it should "not NPE" in { | |||||
| iotesters.Driver.execute(() => new daisyVector(4, 32), new TesterOptionsManager) { c => | |||||
| val myTest = IoSpec[daisyVector](ins, c) | |||||
| myTest.myTester | |||||
| } should be(true) | |||||
| } | |||||
| } | |||||
| @@ -1,52 +0,0 @@ | |||||
| def scalacOptionsVersion(scalaVersion: String): Seq[String] = { | |||||
| Seq() ++ { | |||||
| // If we're building with Scala > 2.11, enable the compile option | |||||
| // switch to support our anonymous Bundle definitions: | |||||
| // https://github.com/scala/bug/issues/10047 | |||||
| CrossVersion.partialVersion(scalaVersion) match { | |||||
| case Some((2, scalaMajor: Long)) if scalaMajor < 12 => Seq() | |||||
| case _ => Seq("-Xsource:2.11") | |||||
| } | |||||
| } | |||||
| } | |||||
| def javacOptionsVersion(scalaVersion: String): Seq[String] = { | |||||
| Seq() ++ { | |||||
| // Scala 2.12 requires Java 8. We continue to generate | |||||
| // Java 7 compatible code for Scala 2.11 | |||||
| // for compatibility with old clients. | |||||
| CrossVersion.partialVersion(scalaVersion) match { | |||||
| case Some((2, scalaMajor: Long)) if scalaMajor < 12 => | |||||
| Seq("-source", "1.7", "-target", "1.7") | |||||
| case _ => | |||||
| Seq("-source", "1.8", "-target", "1.8") | |||||
| } | |||||
| } | |||||
| } | |||||
| name := "chisel-module-template" | |||||
| version := "3.1.0" | |||||
| scalaVersion := "2.12.4" | |||||
| crossScalaVersions := Seq("2.11.12", "2.12.4") | |||||
| resolvers ++= Seq( | |||||
| Resolver.sonatypeRepo("snapshots"), | |||||
| Resolver.sonatypeRepo("releases") | |||||
| ) | |||||
| // Provide a managed dependency on X if -DXVersion="" is supplied on the command line. | |||||
| val defaultVersions = Map( | |||||
| "chisel3" -> "3.1.+", | |||||
| "chisel-iotesters" -> "1.2.+" | |||||
| ) | |||||
| libraryDependencies ++= (Seq("chisel3","chisel-iotesters").map { | |||||
| dep: String => "edu.berkeley.cs" %% dep % sys.props.getOrElse(dep + "Version", defaultVersions(dep)) }) | |||||
| scalacOptions ++= scalacOptionsVersion(scalaVersion.value) | |||||
| scalacOptions ++= Seq("-language:reflectiveCalls") | |||||
| javacOptions ++= javacOptionsVersion(scalaVersion.value) | |||||
| @@ -1 +0,0 @@ | |||||
| sbt.version = 1.1.0 | |||||
| @@ -1,256 +0,0 @@ | |||||
| package Ov1 | |||||
| import chisel3._ | |||||
| import chisel3.util._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| object Instructions { | |||||
| def BEQ = BitPat("b?????????????????000?????1100011") | |||||
| def BNE = BitPat("b?????????????????001?????1100011") | |||||
| def BLT = BitPat("b?????????????????100?????1100011") | |||||
| def BGE = BitPat("b?????????????????101?????1100011") | |||||
| def BLTU = BitPat("b?????????????????110?????1100011") | |||||
| def BGEU = BitPat("b?????????????????111?????1100011") | |||||
| def JALR = BitPat("b?????????????????000?????1100111") | |||||
| def JAL = BitPat("b?????????????????????????1101111") | |||||
| def LUI = BitPat("b?????????????????????????0110111") | |||||
| def AUIPC = BitPat("b?????????????????????????0010111") | |||||
| def ADDI = BitPat("b?????????????????000?????0010011") | |||||
| def SLLI = BitPat("b000000???????????001?????0010011") | |||||
| def SLTI = BitPat("b?????????????????010?????0010011") | |||||
| def SLTIU = BitPat("b?????????????????011?????0010011") | |||||
| def XORI = BitPat("b?????????????????100?????0010011") | |||||
| def SRLI = BitPat("b000000???????????101?????0010011") | |||||
| def SRAI = BitPat("b010000???????????101?????0010011") | |||||
| def ORI = BitPat("b?????????????????110?????0010011") | |||||
| def ANDI = BitPat("b?????????????????111?????0010011") | |||||
| def ADD = BitPat("b0000000??????????000?????0110011") | |||||
| def SUB = BitPat("b0100000??????????000?????0110011") | |||||
| def SLL = BitPat("b0000000??????????001?????0110011") | |||||
| def SLT = BitPat("b0000000??????????010?????0110011") | |||||
| def SLTU = BitPat("b0000000??????????011?????0110011") | |||||
| def XOR = BitPat("b0000000??????????100?????0110011") | |||||
| def SRL = BitPat("b0000000??????????101?????0110011") | |||||
| def SRA = BitPat("b0100000??????????101?????0110011") | |||||
| def OR = BitPat("b0000000??????????110?????0110011") | |||||
| def AND = BitPat("b0000000??????????111?????0110011") | |||||
| def ADDIW = BitPat("b?????????????????000?????0011011") | |||||
| def SLLIW = BitPat("b0000000??????????001?????0011011") | |||||
| def SRLIW = BitPat("b0000000??????????101?????0011011") | |||||
| def SRAIW = BitPat("b0100000??????????101?????0011011") | |||||
| def ADDW = BitPat("b0000000??????????000?????0111011") | |||||
| def SUBW = BitPat("b0100000??????????000?????0111011") | |||||
| def SLLW = BitPat("b0000000??????????001?????0111011") | |||||
| def SRLW = BitPat("b0000000??????????101?????0111011") | |||||
| def SRAW = BitPat("b0100000??????????101?????0111011") | |||||
| def LB = BitPat("b?????????????????000?????0000011") | |||||
| def LH = BitPat("b?????????????????001?????0000011") | |||||
| def LW = BitPat("b?????????????????010?????0000011") | |||||
| def LD = BitPat("b?????????????????011?????0000011") | |||||
| def LBU = BitPat("b?????????????????100?????0000011") | |||||
| def LHU = BitPat("b?????????????????101?????0000011") | |||||
| def LWU = BitPat("b?????????????????110?????0000011") | |||||
| def SB = BitPat("b?????????????????000?????0100011") | |||||
| def SH = BitPat("b?????????????????001?????0100011") | |||||
| def SW = BitPat("b?????????????????010?????0100011") | |||||
| def SD = BitPat("b?????????????????011?????0100011") | |||||
| def FENCE = BitPat("b?????????????????000?????0001111") | |||||
| def FENCE_I = BitPat("b?????????????????001?????0001111") | |||||
| def MUL = BitPat("b0000001??????????000?????0110011") | |||||
| def MULH = BitPat("b0000001??????????001?????0110011") | |||||
| def MULHSU = BitPat("b0000001??????????010?????0110011") | |||||
| def MULHU = BitPat("b0000001??????????011?????0110011") | |||||
| def DIV = BitPat("b0000001??????????100?????0110011") | |||||
| def DIVU = BitPat("b0000001??????????101?????0110011") | |||||
| def REM = BitPat("b0000001??????????110?????0110011") | |||||
| def REMU = BitPat("b0000001??????????111?????0110011") | |||||
| def MULW = BitPat("b0000001??????????000?????0111011") | |||||
| def DIVW = BitPat("b0000001??????????100?????0111011") | |||||
| def DIVUW = BitPat("b0000001??????????101?????0111011") | |||||
| def REMW = BitPat("b0000001??????????110?????0111011") | |||||
| def REMUW = BitPat("b0000001??????????111?????0111011") | |||||
| def LR_W = BitPat("b00010??00000?????010?????0101111") | |||||
| def SC_W = BitPat("b00011????????????010?????0101111") | |||||
| def LR_D = BitPat("b00010??00000?????011?????0101111") | |||||
| def SC_D = BitPat("b00011????????????011?????0101111") | |||||
| def ECALL = BitPat("b00000000000000000000000001110011") | |||||
| def EBREAK = BitPat("b00000000000100000000000001110011") | |||||
| def URET = BitPat("b00000000001000000000000001110011") | |||||
| def MRET = BitPat("b00110000001000000000000001110011") | |||||
| def DRET = BitPat("b01111011001000000000000001110011") | |||||
| def SFENCE_VMA = BitPat("b0001001??????????000000001110011") | |||||
| def WFI = BitPat("b00010000010100000000000001110011") | |||||
| def CSRRW = BitPat("b?????????????????001?????1110011") | |||||
| def CSRRS = BitPat("b?????????????????010?????1110011") | |||||
| def CSRRC = BitPat("b?????????????????011?????1110011") | |||||
| def CSRRWI = BitPat("b?????????????????101?????1110011") | |||||
| def CSRRSI = BitPat("b?????????????????110?????1110011") | |||||
| def CSRRCI = BitPat("b?????????????????111?????1110011") | |||||
| def CUSTOM0 = BitPat("b?????????????????000?????0001011") | |||||
| def CUSTOM0_RS1 = BitPat("b?????????????????010?????0001011") | |||||
| def CUSTOM0_RS1_RS2 = BitPat("b?????????????????011?????0001011") | |||||
| def CUSTOM0_RD = BitPat("b?????????????????100?????0001011") | |||||
| def CUSTOM0_RD_RS1 = BitPat("b?????????????????110?????0001011") | |||||
| def CUSTOM0_RD_RS1_RS2 = BitPat("b?????????????????111?????0001011") | |||||
| def CUSTOM1 = BitPat("b?????????????????000?????0101011") | |||||
| def CUSTOM1_RS1 = BitPat("b?????????????????010?????0101011") | |||||
| def CUSTOM1_RS1_RS2 = BitPat("b?????????????????011?????0101011") | |||||
| def CUSTOM1_RD = BitPat("b?????????????????100?????0101011") | |||||
| def CUSTOM1_RD_RS1 = BitPat("b?????????????????110?????0101011") | |||||
| def CUSTOM1_RD_RS1_RS2 = BitPat("b?????????????????111?????0101011") | |||||
| def CUSTOM2 = BitPat("b?????????????????000?????1011011") | |||||
| def CUSTOM2_RS1 = BitPat("b?????????????????010?????1011011") | |||||
| def CUSTOM2_RS1_RS2 = BitPat("b?????????????????011?????1011011") | |||||
| def CUSTOM2_RD = BitPat("b?????????????????100?????1011011") | |||||
| def CUSTOM2_RD_RS1 = BitPat("b?????????????????110?????1011011") | |||||
| def CUSTOM2_RD_RS1_RS2 = BitPat("b?????????????????111?????1011011") | |||||
| def CUSTOM3 = BitPat("b?????????????????000?????1111011") | |||||
| def CUSTOM3_RS1 = BitPat("b?????????????????010?????1111011") | |||||
| def CUSTOM3_RS1_RS2 = BitPat("b?????????????????011?????1111011") | |||||
| def CUSTOM3_RD = BitPat("b?????????????????100?????1111011") | |||||
| def CUSTOM3_RD_RS1 = BitPat("b?????????????????110?????1111011") | |||||
| def CUSTOM3_RD_RS1_RS2 = BitPat("b?????????????????111?????1111011") | |||||
| def SLLI_RV32 = BitPat("b0000000??????????001?????0010011") | |||||
| def SRLI_RV32 = BitPat("b0000000??????????101?????0010011") | |||||
| def SRAI_RV32 = BitPat("b0100000??????????101?????0010011") | |||||
| def RDCYCLE = BitPat("b11000000000000000010?????1110011") | |||||
| def RDTIME = BitPat("b11000000000100000010?????1110011") | |||||
| def RDINSTRET = BitPat("b11000000001000000010?????1110011") | |||||
| def RDCYCLEH = BitPat("b11001000000000000010?????1110011") | |||||
| def RDTIMEH = BitPat("b11001000000100000010?????1110011") | |||||
| def RDINSTRETH = BitPat("b11001000001000000010?????1110011") | |||||
| } | |||||
| object ScalarOpConstants | |||||
| { | |||||
| //************************************ | |||||
| // Control Signals | |||||
| val Y = true.B | |||||
| val N = false.B | |||||
| // PC Select Signal | |||||
| val PC_4 = 0.asUInt(3.W) // PC + 4 | |||||
| val PC_BR = 1.asUInt(3.W) // branch_target | |||||
| val PC_J = 2.asUInt(3.W) // jump_target | |||||
| val PC_JR = 3.asUInt(3.W) // jump_reg_target | |||||
| val PC_EXC = 4.asUInt(3.W) // exception | |||||
| // Branch Type | |||||
| val BR_N = 0.asUInt(4.W) // Next | |||||
| val BR_NE = 1.asUInt(4.W) // Branch on NotEqual | |||||
| val BR_EQ = 2.asUInt(4.W) // Branch on Equal | |||||
| val BR_GE = 3.asUInt(4.W) // Branch on Greater/Equal | |||||
| val BR_GEU = 4.asUInt(4.W) // Branch on Greater/Equal Unsigned | |||||
| val BR_LT = 5.asUInt(4.W) // Branch on Less Than | |||||
| val BR_LTU = 6.asUInt(4.W) // Branch on Less Than Unsigned | |||||
| val BR_J = 7.asUInt(4.W) // Jump | |||||
| val BR_JR = 8.asUInt(4.W) // Jump Register | |||||
| // RS1 Operand Select Signal | |||||
| val OP1_RS1 = 0.asUInt(2.W) // Register Source #1 | |||||
| val OP1_IMU = 1.asUInt(2.W) // immediate, U-type | |||||
| val OP1_IMZ = 2.asUInt(2.W) // Zero-extended rs1 field of inst, for CSRI instructions | |||||
| val OP1_X = 0.asUInt(2.W) | |||||
| // RS2 Operand Select Signal | |||||
| val OP2_RS2 = 0.asUInt(2.W) // Register Source #2 | |||||
| val OP2_IMI = 1.asUInt(2.W) // immediate, I-type | |||||
| val OP2_IMS = 2.asUInt(2.W) // immediate, S-type | |||||
| val OP2_PC = 3.asUInt(2.W) // PC | |||||
| val OP2_X = 0.asUInt(2.W) | |||||
| // Register File Write Enable Signal | |||||
| val REN_0 = false.B | |||||
| val REN_1 = true.B | |||||
| val REN_X = false.B | |||||
| // ALU Operation Signal | |||||
| val ALU_ADD = 1.asUInt(4.W) | |||||
| val ALU_SUB = 2.asUInt(4.W) | |||||
| val ALU_SLL = 3.asUInt(4.W) | |||||
| val ALU_SRL = 4.asUInt(4.W) | |||||
| val ALU_SRA = 5.asUInt(4.W) | |||||
| val ALU_AND = 6.asUInt(4.W) | |||||
| val ALU_OR = 7.asUInt(4.W) | |||||
| val ALU_XOR = 8.asUInt(4.W) | |||||
| val ALU_SLT = 9.asUInt(4.W) | |||||
| val ALU_SLTU= 10.asUInt(4.W) | |||||
| val ALU_COPY1= 11.asUInt(4.W) | |||||
| val ALU_X = 0.asUInt(4.W) | |||||
| // Writeback Select Signal | |||||
| val WB_ALU = 0.asUInt(2.W) | |||||
| val WB_MEM = 1.asUInt(2.W) | |||||
| val WB_PC4 = 2.asUInt(2.W) | |||||
| val WB_CSR = 3.asUInt(2.W) | |||||
| val WB_X = 0.asUInt(2.W) | |||||
| // Memory Function Type (Read,Write,Fence) Signal | |||||
| val MWR_R = 0.asUInt(2.W) | |||||
| val MWR_W = 1.asUInt(2.W) | |||||
| val MWR_F = 2.asUInt(2.W) | |||||
| val MWR_X = 0.asUInt(2.W) | |||||
| // Memory Enable Signal | |||||
| val MEN_0 = false.B | |||||
| val MEN_1 = true.B | |||||
| val MEN_X = false.B | |||||
| // Memory Mask Type Signal | |||||
| val MSK_B = 0.asUInt(3.W) | |||||
| val MSK_BU = 1.asUInt(3.W) | |||||
| val MSK_H = 2.asUInt(3.W) | |||||
| val MSK_HU = 3.asUInt(3.W) | |||||
| val MSK_W = 4.asUInt(3.W) | |||||
| val MSK_X = 4.asUInt(3.W) | |||||
| // Cache Flushes & Sync Primitives | |||||
| val M_N = 0.asUInt(3.W) | |||||
| val M_SI = 1.asUInt(3.W) // synch instruction stream | |||||
| val M_SD = 2.asUInt(3.W) // synch data stream | |||||
| val M_FA = 3.asUInt(3.W) // flush all caches | |||||
| val M_FD = 4.asUInt(3.W) // flush data cache | |||||
| // Memory Functions (read, write, fence) | |||||
| val MT_READ = 0.asUInt(2.W) | |||||
| val MT_WRITE = 1.asUInt(2.W) | |||||
| val MT_FENCE = 2.asUInt(2.W) | |||||
| } | |||||
| object MemoryOpConstants | |||||
| { | |||||
| val MT_X = 0.asUInt(3.W) | |||||
| val MT_B = 1.asUInt(3.W) | |||||
| val MT_H = 2.asUInt(3.W) | |||||
| val MT_W = 3.asUInt(3.W) | |||||
| val MT_D = 4.asUInt(3.W) | |||||
| val MT_BU = 5.asUInt(3.W) | |||||
| val MT_HU = 6.asUInt(3.W) | |||||
| val MT_WU = 7.asUInt(3.W) | |||||
| val M_X = "b0".asUInt(1.W) | |||||
| val M_XRD = "b0".asUInt(1.W) // int load | |||||
| val M_XWR = "b1".asUInt(1.W) // int store | |||||
| val DPORT = 0 | |||||
| val IPORT = 1 | |||||
| } | |||||
| object CSR | |||||
| { | |||||
| // commands | |||||
| val SZ = 3.W | |||||
| val X = 0.asUInt(SZ) | |||||
| val Nc = 0.asUInt(SZ) | |||||
| val W = 1.asUInt(SZ) | |||||
| val S = 2.asUInt(SZ) | |||||
| val C = 3.asUInt(SZ) | |||||
| val I = 4.asUInt(SZ) | |||||
| val R = 5.asUInt(SZ) | |||||
| } | |||||
| @@ -1,31 +0,0 @@ | |||||
| package Ov1 | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| /** | |||||
| The decoder should read the instruction's opcode field and output | |||||
| Branch | |||||
| MemRead | |||||
| MemtoReg | |||||
| ALUOp | |||||
| MemWrite | |||||
| ALUSrc | |||||
| RegWrite | |||||
| */ | |||||
| class ControlSignals extends Bundle(){ | |||||
| val Branch = Output(Bool()) | |||||
| val MemRead = Output(Bool()) | |||||
| val MemtoReg = Output(Bool()) | |||||
| val MemWrite = Output(Bool()) | |||||
| val ALUSrc = Output(Bool()) | |||||
| val RegWrite = Output(Bool()) | |||||
| } | |||||
| // class myDecoder(val hurr: Int) extends Module { | |||||
| // } | |||||
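| // A minimal, hypothetical sketch (not part of the exercise solution) of how these signals | |||||
| // could be driven directly from the opcode field, covering only a few opcode classes; the | |||||
| // Tile module later in this repo uses a full ListLookup table instead. ALUOp is omitted | |||||
| // since it is not part of the ControlSignals bundle above. | |||||
| // | |||||
| // class MyDecoder extends Module { | |||||
| //   val io = IO(new Bundle { | |||||
| //     val instruction = Input(UInt(32.W)) | |||||
| //     val signals     = new ControlSignals | |||||
| //   }) | |||||
| //   val opcode = io.instruction(6, 0) | |||||
| //   io.signals.Branch   := opcode === "b1100011".U                    // conditional branches | |||||
| //   io.signals.MemRead  := opcode === "b0000011".U                    // loads | |||||
| //   io.signals.MemtoReg := io.signals.MemRead                         // write back from memory on loads | |||||
| //   io.signals.MemWrite := opcode === "b0100011".U                    // stores | |||||
| //   io.signals.ALUSrc   := io.signals.MemRead || io.signals.MemWrite || opcode === "b0010011".U // immediates | |||||
| //   io.signals.RegWrite := !io.signals.Branch && !io.signals.MemWrite // everything else writes rd | |||||
| // } | |||||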
| @@ -1,13 +0,0 @@ | |||||
| package Ov1 | |||||
| import chisel3._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| object Defs { | |||||
| class RType extends Bundle { | |||||
| } | |||||
| } | |||||
| @@ -1,122 +0,0 @@ | |||||
| package Ov1 | |||||
| import chisel3._ | |||||
| import chisel3.util._ | |||||
| import chisel3.core.Input | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| object CoreMain { | |||||
| def main(args: Array[String]): Unit = { | |||||
| iotesters.Driver.execute(args, () => new Tile()) { | |||||
| c => new TileTest(c) | |||||
| } | |||||
| } | |||||
| } | |||||
| class Tile() extends Module{ | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val instruction = Input(UInt(32.W)) | |||||
| val opcode = Output(UInt(7.W)) | |||||
| val immediate = Output(UInt(12.W)) | |||||
| }) | |||||
| class Itype extends Bundle { | |||||
| val opcode = UInt(7.W) | |||||
| val rd = UInt(5.W) | |||||
| val funct3 = UInt(3.W) | |||||
| val rs1 = UInt(5.W) | |||||
| val immediate = UInt(12.W) | |||||
| } | |||||
| import Instructions._ | |||||
| import ScalarOpConstants._ | |||||
| import MemoryOpConstants._ | |||||
| import CSR._ | |||||
| val memes = io.instruction.asTypeOf(new Itype) | |||||
| io.opcode := memes.opcode | |||||
| io.immediate := memes.immediate | |||||
| val defaultSignals = List(N, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc) | |||||
| val ControlSignals = ListLookup(io.instruction, | |||||
| defaultSignals, | |||||
| Array( /* val | BR | op1 | op2 | ALU | wb | rf | mem | mem | mask | csr */ | |||||
| /* inst | type | sel | sel | fcn | sel | wen | en | wr | type | cmd */ | |||||
| LW -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_MEM, REN_1, MEN_1, M_XRD, MT_W, CSR.Nc), | |||||
| LB -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_MEM, REN_1, MEN_1, M_XRD, MT_B, CSR.Nc), | |||||
| LBU -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_MEM, REN_1, MEN_1, M_XRD, MT_BU, CSR.Nc), | |||||
| LH -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_MEM, REN_1, MEN_1, M_XRD, MT_H, CSR.Nc), | |||||
| LHU -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_MEM, REN_1, MEN_1, M_XRD, MT_HU, CSR.Nc), | |||||
| SW -> List(Y, BR_N , OP1_RS1, OP2_IMS , ALU_ADD , WB_X , REN_0, MEN_1, M_XWR, MT_W, CSR.Nc), | |||||
| SB -> List(Y, BR_N , OP1_RS1, OP2_IMS , ALU_ADD , WB_X , REN_0, MEN_1, M_XWR, MT_B, CSR.Nc), | |||||
| SH -> List(Y, BR_N , OP1_RS1, OP2_IMS , ALU_ADD , WB_X , REN_0, MEN_1, M_XWR, MT_H, CSR.Nc), | |||||
| AUIPC -> List(Y, BR_N , OP1_IMU, OP2_PC , ALU_ADD , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| LUI -> List(Y, BR_N , OP1_IMU, OP2_X , ALU_COPY1, WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| ADDI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_ADD , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| ANDI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_AND , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| ORI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_OR , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| XORI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_XOR , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLTI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_SLT , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLTIU -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_SLTU, WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLLI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_SLL , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SRAI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_SRA , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SRLI -> List(Y, BR_N , OP1_RS1, OP2_IMI , ALU_SRL , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLL -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SLL , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| ADD -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_ADD , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SUB -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SUB , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLT -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SLT , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SLTU -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SLTU, WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| AND -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_AND , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| OR -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_OR , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| XOR -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_XOR , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SRA -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SRA , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| SRL -> List(Y, BR_N , OP1_RS1, OP2_RS2 , ALU_SRL , WB_ALU, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| JAL -> List(Y, BR_J , OP1_X , OP2_X , ALU_X , WB_PC4, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| JALR -> List(Y, BR_JR , OP1_RS1, OP2_IMI , ALU_X , WB_PC4, REN_1, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BEQ -> List(Y, BR_EQ , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BNE -> List(Y, BR_NE , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BGE -> List(Y, BR_GE , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BGEU -> List(Y, BR_GEU, OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BLT -> List(Y, BR_LT , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| BLTU -> List(Y, BR_LTU, OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| CSRRWI -> List(Y, BR_N , OP1_IMZ, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.W), | |||||
| CSRRSI -> List(Y, BR_N , OP1_IMZ, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.S), | |||||
| CSRRCI -> List(Y, BR_N , OP1_IMZ, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.C), | |||||
| CSRRW -> List(Y, BR_N , OP1_RS1, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.W), | |||||
| CSRRS -> List(Y, BR_N , OP1_RS1, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.S), | |||||
| CSRRC -> List(Y, BR_N , OP1_RS1, OP2_X , ALU_COPY1, WB_CSR, REN_1, MEN_0, M_X , MT_X, CSR.C), | |||||
| ECALL -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.I), | |||||
| MRET -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.I), | |||||
| DRET -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.I), | |||||
| EBREAK -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.I), | |||||
| WFI -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), // implemented as a NOP | |||||
| FENCE_I -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_0, M_X , MT_X, CSR.Nc), | |||||
| FENCE -> List(Y, BR_N , OP1_X , OP2_X , ALU_X , WB_X , REN_0, MEN_1, M_X , MT_X, CSR.Nc) | |||||
| // we are already sequentially consistent, so no need to honor the fence instruction | |||||
| )) | |||||
| } | |||||
| class TileTest(c: Tile) extends PeekPokeTester(c) { | |||||
| println("yo") | |||||
| step(1) | |||||
| poke(c.io.instruction, 0xAABB) | |||||
| val hurr = peek(c.io.opcode) | |||||
| val durr = peek(c.io.immediate) | |||||
| println(hurr.toString) | |||||
| println(durr.toString) | |||||
| step(1) | |||||
| } | |||||
| @@ -0,0 +1,40 @@ | |||||
| import sbt._ | |||||
| object Dependencies { | |||||
| val fs2Version = "1.0.0" | |||||
| val catsVersion = "1.4.0" | |||||
| val catsEffectVersion = "1.0.0" | |||||
| // Dependencies for JVM part of code | |||||
| val backendDeps = Def.setting( | |||||
| Seq( | |||||
| "com.lihaoyi" %% "sourcecode" % "0.1.4", // expert println debugging | |||||
| "com.lihaoyi" %% "pprint" % "0.5.3", // pretty print for types and case classes | |||||
| "org.typelevel" %% "cats-core" % catsVersion, // abstract category dork stuff | |||||
| "com.chuusai" %% "shapeless" % "2.3.2", // Abstract level category dork stuff | |||||
| "joda-time" % "joda-time" % "2.9.9", | |||||
| "org.joda" % "joda-convert" % "2.0.1", | |||||
| "org.typelevel" %% "cats-effect" % catsEffectVersion, // IO monad category wank | |||||
| "co.fs2" %% "fs2-core" % fs2Version, // The best library | |||||
| "co.fs2" %% "fs2-io" % fs2Version, // The best library | |||||
| "com.beachape" %% "enumeratum" % "1.5.13", | |||||
| "com.github.nscala-time" %% "nscala-time" % "2.16.0", // Time | |||||
| "org.tpolecat" %% "atto-core" % "0.6.3", | |||||
| "org.tpolecat" %% "atto-refined" % "0.6.3", | |||||
| "org.typelevel" %% "spire" % "0.14.1", | |||||
| "io.estatico" %% "newtype" % "0.4.2", | |||||
| "com.github.pathikrit" %% "better-files" % "3.7.0", | |||||
| "org.atnos" %% "eff" % "5.2.0" | |||||
| )) | |||||
| } | |||||
| @@ -0,0 +1,176 @@ | |||||
| /** | |||||
| * This code supplements instructions.org | |||||
| * Once you've gone through the instructions you can do | |||||
| * whatever you want with it. | |||||
| */ | |||||
| package Ex0 | |||||
| import chisel3._ | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| import TestUtils._ | |||||
| // class MyVector() extends Module { | |||||
| // val io = IO( | |||||
| // new Bundle { | |||||
| // val idx = Input(UInt(32.W)) | |||||
| // val out = Output(UInt(32.W)) | |||||
| // } | |||||
| // ) | |||||
| // val values = List(1, 2, 3, 4) | |||||
| // io.out := values(io.idx) | |||||
| // } | |||||
| // class MyVector() extends Module { | |||||
| // val io = IO( | |||||
| // new Bundle { | |||||
| // val idx = Input(UInt(32.W)) | |||||
| // val out = Output(UInt(32.W)) | |||||
| // } | |||||
| // ) | |||||
| // // val values: List[Int] = List(1, 2, 3, 4) | |||||
| // val values = Vec(1, 2, 3, 4) | |||||
| // io.out := values(io.idx) | |||||
| // } | |||||
| class MyVector() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val idx = Input(UInt(32.W)) | |||||
| val out = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val values = Vec(0.U, 1.U, 2.U, 3.U) | |||||
| io.out := values(io.idx) | |||||
| } | |||||
| class MyVector2() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val idx = Input(UInt(2.W)) | |||||
| val out = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val values = Array(0.U, 1.U, 2.U, 3.U) | |||||
| io.out := values(0) | |||||
| for(ii <- 0 until 4){ | |||||
| when(io.idx === ii.U){ | |||||
| io.out := values(ii) | |||||
| } | |||||
| } | |||||
| } | |||||
| class MyVecSpec extends FlatSpec with Matchers { | |||||
| behavior of "MyVec" | |||||
| it should "Output whatever idx points to" in { | |||||
| wrapTester( | |||||
| chisel3.iotesters.Driver(() => new MyVector2) { c => | |||||
| new MyVecTester(c) | |||||
| } should be(true) | |||||
| ) | |||||
| } | |||||
| } | |||||
| class MyVecTester(c: MyVector2) extends PeekPokeTester(c) { | |||||
| for(ii <- 0 until 4){ | |||||
| poke(c.io.idx, ii) | |||||
| expect(c.io.out, ii) | |||||
| } | |||||
| } | |||||
| class Invalid() extends Module { | |||||
| val io = IO(new Bundle{}) | |||||
| val myVec = Module(new MyVector) | |||||
| // Uncomment line below to make the circuit valid | |||||
| // myVec.io.idx := 0.U | |||||
| } | |||||
| /** | |||||
| * This goes a little beyond the example in exercise.org. | |||||
| * wrapTester is a simple wrapper that catches unconnected-wire errors | |||||
| * and prints them with a less scary stack trace. | |||||
| * Additionally, we throw a RuntimeException instead of ??? for | |||||
| * similar reasons. | |||||
| * | |||||
| */ | |||||
| class InvalidSpec extends FlatSpec with Matchers { | |||||
| behavior of "Invalid" | |||||
| it should "Fail with a RefNotInitializedException" in { | |||||
| try { | |||||
| wrapTester( | |||||
| chisel3.iotesters.Driver(() => new Invalid) { c => | |||||
| // Just a placeholder so it compiles | |||||
| throw new RuntimeException with scala.util.control.NoStackTrace | |||||
| } should be(true) | |||||
| ) | |||||
| } | |||||
| catch { | |||||
| case e: RuntimeException => println("all good!") | |||||
| case e: Exception => throw e | |||||
| } | |||||
| } | |||||
| } | |||||
| class SimpleDelay() extends Module { | |||||
| val io = IO( | |||||
| new Bundle { | |||||
| val dataIn = Input(UInt(32.W)) | |||||
| val dataOut = Output(UInt(32.W)) | |||||
| } | |||||
| ) | |||||
| val delayReg = RegInit(UInt(32.W), 0.U) | |||||
| delayReg := io.dataIn | |||||
| io.dataOut := delayReg | |||||
| } | |||||
| class DelaySpec extends FlatSpec with Matchers { | |||||
| behavior of "SimpleDelay" | |||||
| it should "Delay input by one timestep" in { | |||||
| wrapTester( | |||||
| chisel3.iotesters.Driver(() => new SimpleDelay) { c => | |||||
| new DelayTester(c) | |||||
| } should be(true) | |||||
| ) | |||||
| } | |||||
| } | |||||
| // class DelayTester(c: SimpleDelay) extends PeekPokeTester(c) { | |||||
| // for(ii <- 0 until 10){ | |||||
| // val input = scala.util.Random.nextInt(10) | |||||
| // poke(c.io.dataIn, input) | |||||
| // expect(c.io.dataOut, input) | |||||
| // } | |||||
| // } | |||||
| class DelayTester(c: SimpleDelay) extends PeekPokeTester(c) { | |||||
| for(ii <- 0 until 10){ | |||||
| val input = scala.util.Random.nextInt(10) | |||||
| poke(c.io.dataIn, input) | |||||
| step(1) | |||||
| expect(c.io.dataOut, input) | |||||
| } | |||||
| } | |||||
| @@ -0,0 +1,28 @@ | |||||
| package Ex0 | |||||
| import chisel3._ | |||||
| import chisel3.iotesters.PeekPokeTester | |||||
| import org.scalatest.{Matchers, FlatSpec} | |||||
| object TestUtils { | |||||
| def wrapTester(test: => Unit): Unit = { | |||||
| try { test } | |||||
| catch { | |||||
| case e: firrtl.passes.CheckInitialization.RefNotInitializedException => { | |||||
| println("##########################################################") | |||||
| println("##########################################################") | |||||
| println("##########################################################") | |||||
| println("Your design has unconnected wires!") | |||||
| println("error:\n") | |||||
| println(e.getMessage) | |||||
| println("") | |||||
| println("") | |||||
| println("##########################################################") | |||||
| println("##########################################################") | |||||
| println("##########################################################") | |||||
| } | |||||
| case e: Exception => throw e | |||||
| } | |||||
| } | |||||
| } | |||||