So I had to implement an SDRAM controller on a Xilinx chip, which means using the MIG (Memory Interface Generator). First off: this is not a simple click-through-the-wizard task; you must understand how the MIG works to get going. In the end the MIG just creates source files for you; there are no pre-synthesized netlist (NGC/NGO) files involved. It hands you a complete, editable source implementation. This has certain implications:
1. The source is compiled within your project, so any XST (synthesis) settings you have will affect the synthesis of the MIG. For example, if you want your design optimized for area while the MIG needs to be optimized for speed, you had better figure out how to wrap the MIG in a black box.
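One way to decouple the two (a sketch of the usual ISE black-box technique, assuming the command-line flow; project, module, and part names here are made up) is to synthesize the MIG sources to their own NGC with speed settings, leave the MIG module body empty in the top-level project so XST treats it as a black box, and let ngdbuild resolve the netlist from its search path:

```
# mig.xst -- synthesize only the MIG sources, optimized for speed
run
-ifn mig.prj
-top mig_top
-ofn mig_top.ngc
-p xc5vlx50t-ff1136-1
-opt_mode Speed
-opt_level 1

# top.xst -- the rest of the design, optimized for area; mig_top is
# declared but has no contents, so XST emits it as a black box
run
-ifn top.prj
-top system_top
-ofn system_top.ngc
-p xc5vlx50t-ff1136-1
-opt_mode Area
-opt_level 2
```

ngdbuild will then stitch mig_top.ngc into the design as long as it sits in a directory passed with -sd.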
2. It is not especially friendly, as the assumption is that you will open the source and edit whatever you need to. If you choose to have the MIG implement the clocking, you must provide GC (global clock) input pins for both the interface clock (266 MHz, for example) and the 200 MHz IDELAY clock; otherwise the code won't compile and you'll need to modify the source (most changes will be in the infrastructure file). If you choose to implement the clocks yourself, then you'll have to provide the 4 clocks correctly phase aligned (phase alignment isn't required for the 200 MHz IDELAY clock) and routed through BUFGs (global buffers) into the top MIG module.
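A user-supplied clocking module might look roughly like this (a Verilog sketch using Virtex-5-era primitives; the port names clk0/clk90/clkdiv0/clk200 follow the MIG's usual convention, but check them against your generated top module rather than trusting this illustration):

```verilog
// Hypothetical replacement for the MIG's internal clocking.
// Feeding CLK0 back through its BUFG phase-aligns all DCM outputs
// to the clock as seen at the fabric.
module mig_clocks (
  input  wire sys_clk_in,   // interface-rate input (e.g. 266 MHz) on a GC pin
  input  wire clk200_in,    // 200 MHz IDELAYCTRL reference on a GC pin
  input  wire rst,
  output wire clk0,         // interface rate, 0 degrees
  output wire clk90,        // interface rate, 90 degrees
  output wire clkdiv0,      // half rate, phase aligned to clk0
  output wire clk200,       // 200 MHz, no phase relationship required
  output wire dcm_locked
);
  wire clk0_dcm, clk90_dcm, clkdiv0_dcm;

  DCM_BASE #(
    .CLKDV_DIVIDE (2.0)     // clkdiv0 = interface rate / 2
  ) u_dcm (
    .CLKIN  (sys_clk_in),
    .CLKFB  (clk0),         // feedback from the buffered clock
    .RST    (rst),
    .CLK0   (clk0_dcm),
    .CLK90  (clk90_dcm),
    .CLKDV  (clkdiv0_dcm),
    .LOCKED (dcm_locked)
  );

  // The MIG expects all four clocks to arrive through global buffers.
  BUFG u_bufg_clk0    (.I(clk0_dcm),    .O(clk0));
  BUFG u_bufg_clk90   (.I(clk90_dcm),   .O(clk90));
  BUFG u_bufg_clkdiv0 (.I(clkdiv0_dcm), .O(clkdiv0));
  BUFG u_bufg_clk200  (.I(clk200_in),   .O(clk200));
endmodule
```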
3. All EVM examples I've seen use a different pinout than the defaults given by the MIG, and under most circumstances so will any new board. This is a pain in the ass, but not as bad as you might think. The MIG has a number of tedious pinout-related requirements, e.g. "What is the column location of each DQS pin?", "What type of pin is each DQ - Master or Slave?", and "How many regions have DQS pins in them?". All of this can be updated manually by following the instructions in the MIG user manual (the appendix on required UCF modifications), but the easiest way to update it is to create a MIG implementation, making sure to mark all banks as available for all functions (one of the last wizard pages has a list of banks with checkboxes for address, control, and data). Take the generated UCF, remove all constraints except the pin locations, and manually update the pin locations to your board's pinout. Then open the MIG design again, choose "Update Design", and give it the modified UCF. The MIG will regenerate the full design, updating all constraints and source files, and spit out a new UCF with the correct pinout and all the required modifications.
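The "update the pin locations manually" step is mechanical enough to script. Here is a small Python sketch (the net names and pin sites in the docstring are invented for illustration; use the ones from your own board and UCF):

```python
import re

def update_pin_locs(ucf_text, pin_map):
    """Rewrite LOC constraints in a UCF to match a new board pinout.

    pin_map maps net names (as written in the NET "..." lines) to new
    pin sites, e.g. {"ddr2_dq[0]": "AB5"}.  Nets not in the map are
    left untouched, as are all non-LOC constraints.
    """
    pattern = re.compile(r'NET\s+"(?P<net>[^"]+)"\s+LOC\s*=\s*"[^"]+"')

    def replace(match):
        net = match.group("net")
        if net in pin_map:
            return f'NET "{net}" LOC = "{pin_map[net]}"'
        return match.group(0)

    return pattern.sub(replace, ucf_text)
```

Run it over the stripped-down UCF, then hand the result to the MIG's "Update Design" step as described above.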
4. The MIG provides 2 example trees, "user_design" and "example_design". To compile either design you can use the ise_flow.bat found in its par folder. The example design only requires you to hook up the SDRAM signals, the clocks, and the reset; the user design requires you to add your own user code.
The example design is the best way to test that the MIG implementation works: it compiles into a self-contained testbench of the SDRAM. The easiest way I found to use it was to insert a ChipScope core that shows the main application signals, which is fairly easy to do. Do not use the debug option in the MIG; I haven't really figured out what it's for, but I believe it shows you the calibration sequence, which is not what you want to see. To get ChipScope working: build a normal MIG design, update the pinout (as explained above), modify the clocks, run through the XST stage of ise_flow.bat, and then start up ChipScope Inserter. After creating the core and exiting the inserter GUI, call inserter.exe on the command line to apply the .cdc project and create a new NGC file (containing the original design plus the inserted ChipScope core). Finish the rest of the steps from ise_flow.bat with the new NGC (the output of inserter.exe). The compilation should finish, and you will have a design that shows you the signals from within the testbench in ChipScope Analyzer. If you're lucky you'll see phy_init_done rise and the accesses begin to take place.
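The modified flow ends up looking something like the fragment below (a sketch, not a copy of the generated ise_flow.bat: file names are illustrative, and I've deliberately elided inserter.exe's arguments rather than guess them; run the inserter with no arguments or check the ChipScope Pro documentation for its exact command line):

```bat
rem 1. Synthesis, unchanged from the generated flow
xst -ifn mig_top.xst -ofn mig_top.syr

rem 2. Apply the .cdc project to the synthesized netlist, producing a
rem    new NGC with the ChipScope core spliced in (arguments elided)
inserter.exe ...

rem 3. Continue the flow with the inserter's output NGC instead of the
rem    original mig_top.ngc
ngdbuild -uc mig_top.ucf mig_top_csi.ngc mig_top.ngd
map -o mig_top_map.ncd mig_top.ngd mig_top.pcf
par -w mig_top_map.ncd mig_top.ncd mig_top.pcf
bitgen -w mig_top.ncd
```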