  • Pete Mack
    Prophet
    • Apr 2007
    • 6697

    #16
    But at least some of them aren't using unrestricted learning the way rog-O-matic did. That one really did play a highly optimized strategy, better than the very best players: it got home with the Amulet of Yendor something better than 1 time in 3.


    • Derakon
      Prophet
      • Dec 2009
      • 8820

      #17
      Originally posted by Pete Mack
      But at least some of them aren't using unrestricted learning the way rog-O-matic did. That one really did play a highly optimized strategy, better than the very best players: it got home with the Amulet of Yendor something better than 1 time in 3.
      What do you mean by "unrestricted learning"? Is that some kind of cheat mode, or an AI technique? All Google turns up is this meta-meta-analysis, which has to be some kind of joke.


      • Pete Mack
        Prophet
        • Apr 2007
        • 6697

        #18
        Read the rog-O-matic article. The only thing the learning algorithm had to be taught was the rules and the basics of how the display works. All of the gameplay logic was worked out by trial and error: there were no special rules for dealing with encumbrance, only the fact that loads over a certain weight slow you down.
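        Just to give a flavor of what "trial and error given only the rules" means, here's a toy Python sketch. It has nothing to do with the actual rog-O-matic source; the corridor, the reward scheme, and the Q-learning update are all made up for illustration. The only thing hard-coded is the list of legal moves; which move is worth making where is discovered by playing:

```python
import random
from collections import defaultdict

ACTIONS = ["N", "S", "E", "W"]   # the only hard-coded knowledge: the legal moves


class ToyCorridor:
    """A 1-D corridor: start at square 0, the goal sits at square 5."""

    def __init__(self):
        self.pos = 0

    def step(self, action):
        # The "display" the agent reads back: new position, score change, done flag.
        if action == "E":
            self.pos += 1
        elif action == "W":
            self.pos = max(0, self.pos - 1)
        # N/S do nothing here, but the agent is never told that; it has to notice.
        done = self.pos == 5
        reward = 1.0 if done else -0.01   # small cost per turn, payoff at the goal
        return self.pos, reward, done


def train(episodes=500, epsilon=0.1, alpha=0.5, gamma=0.95):
    q = defaultdict(float)            # learned value of each (position, move) pair
    for _ in range(episodes):
        env, state, done = ToyCorridor(), 0, False
        while not done:
            # Mostly repeat what has worked so far, occasionally try something new.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = env.step(action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # One-step Q-learning update: nudge the estimate toward what actually happened.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q


if __name__ == "__main__":
    q = train()
    for s in range(5):   # the learned policy should end up preferring "E" everywhere
        print(s, max(ACTIONS, key=lambda a: q[(s, a)]))
```

        The table q plays the role of the "learned gameplay logic" here: everything in it comes from observed outcomes, not from rules written by hand.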


        • phylyc
          Rookie
          • Feb 2017
          • 6

          #19
          Wow, thanks a lot for the references! I guess I'll have to dig into rog-O-matic before venturing off on my own course. I'm really curious how it's implemented and how it compares to modern neural network approaches.

