
Intel confirms end of 'Tick/Tock' release schedule


  • Intel confirms end of 'Tick/Tock' release schedule

    Intel has confirmed (quietly, in a 10-K filing - basically a financial document) that it's no longer going to stick to its traditional "tick/tock" release cycle.

    http://www.theregister.co.uk/2016/03..._intels_clock/

    Basically it seems they are hitting a wall in terms of reducing the process size. Not really a shock: below a certain scale you start to encounter strange quantum tunneling effects.

    So it seems that we'll be sticking at 14nm for a while yet, and the next release 'Kaby Lake' is not going to be a 'tick' or a 'tock' exactly. This all indicates more focus on optimisation of the processor design, rather than the scale of the manufacturing process.

  • #2
    Had to get to this point eventually, I guess. When's AMD's next release due, and will it be another pointless 9590-style experiment, I wonder?
    Originally posted by Aaron
    I want those sweet cherries



    • #3
      Not really sure; their new architecture 'Zen' is due out towards the end of this year, but I think they have a lot of work to do regaining confidence in the desktop CPU market after 'Faildozer'.

      I just want them to get their IPC much more in line with Intel's. You can have eleventy million cores, or virtual cores, or whatever, and that's nice, but the reality is that true threaded programming is hard, and most real-world software still just wants strong single-core performance and is fine with 4 cores. People have been blabbering for years about how all software is going to go multi-core soon, but it simply hasn't happened, because the people doing the hyping aren't programmers and don't have the faintest clue what they're talking about. Writing software to make use of a multi-core architecture is hard; it's not just a case of 'load the multi-core library and recompile'.



      • #4
        it's not just a case of 'load the multi-core library and recompile'.
        That goes a long way towards making best use of multicore!
        Very little software is written from scratch nowadays. Most of it consists of already-written functions plugged together in a different way to produce a different result. It is down to the compiler to decide which tasks can be run in parallel and hence use different cores.

        In any case, the easiest way to use multicore CPUs is to run more than one task/program at the same time and restrict each task to using just one core.
        So you have multiple tasks each using one core, but no single task using multiple cores.
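        A minimal Python sketch of that task-per-core idea (the function and workload are made up for illustration): each job runs in its own single-threaded worker process, and the OS scheduler can spread those processes across the cores.

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # An independent, single-threaded task: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [100_000, 200_000, 300_000, 400_000]
    # One process per job: no single job uses more than one core,
    # but the OS can keep four cores busy with the four processes.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, jobs))
    print(results)
```

        No locking or thread scheduling is needed because the tasks share nothing, which is exactly why this is the easy route.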
        I've not failed. I've just found 10,000 ways that don't work!
        Dave Burnett



        • #5
          Originally posted by Burn-IT View Post
          That goes a long way towards making best use of multicore!
          Very little software is written from scratch nowadays. Most of it consists of already-written functions plugged together in a different way to produce a different result. It is down to the compiler to decide which tasks can be run in parallel and hence use different cores.
          That really only applies in the very basic case of large contiguous calculation blocks though. Which is why applications such as rendering are the ones which normally take advantage of multi-core first.

          Most real-world processing just doesn't work that way, and when you simply try to 'make it multicore' you very quickly run into all sorts of horrible scenarios involving race conditions, which make your software fall over. For real-world software to use multicore architectures it generally needs to be designed to do so from the ground up, more or less. You need to start giving serious thought to thread scheduling, wait states, and all sorts of things you don't normally need to consider when writing applications; traditionally those concepts belong to operating systems. To some extent you need to build an operating system into your application.

          There is a reason why, for example, most games today still run on a single core. It's not like no developer over the last 5 years has thought to do #include makeitdomulticorelulz.h - it hasn't happened because it's really not that simple.

          Obviously, yes, you can benefit from multiple tasks using one core each. But that only requires a couple of cores, and at any rate 4 is plenty.



          • #6
            amd have been pushing the ground-up approach big time in the games area. dx12 and Vulkan are both going to be more core/thread sensitive, and while ipc will still matter, it will soon matter less. just look at the latest news about their ability to fill the pipeline bubbles instead of causing choke points, for example. but until games let old engines die, the meat of the new apis will sit going to waste.

            intel has been doing this tick, tick, tock since P55/X58, but it was able to almost hide it in plain sight.
            "Those able to see beyond the shadows and lies of their culture will never be understood, let alone believed, by the masses."
            Plato



            • #7
              Originally posted by luke22 View Post
              while ipc will still matter, it will soon matter less
              Problem is, people said that 5 years ago when trying to justify faildozer, and nothing really has changed. What actual justification do you have for saying that 'it will soon matter less'? Personally I'm going to buy hardware built to deal efficiently with software which exists now, not for some pipe-dream software which people hope will come along later. Especially since as a programmer I understand why it hasn't happened in the last 5 years.

              I agree though that this is really just Intel admitting something it was already doing.



              • #8
                for sure, i mean tbh it has been the same thing since the q66 came out. but times they are a-changing.

                lets look at what drives games, and changes in games tech: consoles. they are the cash cow that allows game makers to go nuts every so often and release insane games. amd being behind the hardware, and microsoft having a massive investment in the bone, means they are best friends now. they're looking to streamline games for windows and cross-platform scalability across both platforms, which has the knock-on effect of helping get the best out of amd hardware across the board.

                the thunderbolt gpu expander thing: i really doubt the pretty weak gpu cores in the consoles played zero part in that venture. not to mention the good intentions it shows towards intel, who really still need a gpu partner....

                people keep seeing doom and gloom in amd's future; i see them best positioned to really reap big rewards over the next 5 years off the groundwork they have been laying since dropping the ati brand. for the brand with the weakest hardware, they manoeuvred themselves into the top seat at the table when it comes to forcing api changes and gaining support from the os provider. and that's before we talk about where they have aligned themselves ready for the vr revolution about to sweep THE WORLD!!!!!! /clarkson voice.
                "Those able to see beyond the shadows and lies of their culture will never be understood, let alone believed, by the masses."
                Plato



                • #9
                  Originally posted by luke22 View Post
                  vr revolution about to sweep THE WORLD!!!!!! /clarkson voice.
                  http://i.imgur.com/SBNtkAX.gifv



                  • #10
                    Originally posted by andyn View Post
                    Problem is, people said that 5 years ago when trying to justify faildozer, and nothing really has changed. What actual justification do you have for saying that 'it will soon matter less'? Personally I'm going to buy hardware built to deal efficiently with software which exists now, not for some pipe-dream software which people hope will come along later. Especially since as a programmer I understand why it hasn't happened in the last 5 years.

                    I agree though that this is really just Intel admitting something it was already doing.

                    Cracking slice o' reality, that post
                    Originally posted by Aaron
                    I want those sweet cherries



                    • #11
                      you know she had no idea about that sticker xD



                      kinda related to amd/dx12/vr teh futures:

                      http://hexus.net/tech/news/software/...hown-new-video
                      Last edited by luke22; 24-03-16, 13:51.
                      "Those able to see beyond the shadows and lies of their culture will never be understood, let alone believed, by the masses."
                      Plato

