
How Intel plans to reinvent the desktop


The desktop is dead, that’s something you’d think was pretty much cemented. Not for everyone of course, like those in corporate offices and certainly not for gamers – they need the space for those monstrous GPUs – but for the average consumer? Definitely.

Just look at the climate we’re in at the moment. The family PC is being replaced by personal smartphones and tablets, and even those who do still use one often have a laptop propped up on the desk instead of a tower/monitor combination – it saves space and it’s mobile. For those not doing video editing or high-def gaming, that’s a perfect combination.

But Intel isn’t ready to see the age of the desktop computer end – it wants to reinvent it. The cynical among you are no doubt pointing out that Intel has lost some ground to AMD in the mobile sector, despite maintaining its performance crown in the desktop CPU market. Sure, that’s probably why it’s so keen to bring back the desktop, but it does have some smart ideas. On top of that, the desktop market, like the banks in the last economic collapse, is simply too big to be allowed to fail. With such a monstrous install base, Intel is hoping to re-purpose that audience rather than try to capture a new one.

That, and Intel really wants to be our girlfriend

A big power play Intel is looking to make is changing the face of the desktop. That means altering its form factor – its shape, size and look – to make it feel more like a contemporary product. In the same way that the sports cars of yesteryear look ancient when placed alongside the sleek, hybrid super cars of today, the old-guard desktops (those that aren’t gamer orientated, or modded by our talented readerbase) just don’t fit in with a world of smart connected devices.

So Intel is looking to push all-in-one systems. Monitor, internal hardware, speakers, peripheral interface (touch), everything. This style is also going to be multi-user orientated, with multi-touch interfaces and large displays to encourage interaction.

However, more traditionally, Intel is also looking to push mini-PCs. Very small form factor, but powerful boxes, similar in some ways to a Steambox. The idea is to get PCs and specifically desktop PCs, into places they haven’t been before, partially by utilising smaller chassis and partly through innovative interfaces, like voice recognition, 3D gesture sensing cameras and bio-authentication.

To make sure these systems are ready to go at a moment’s notice, Intel is also pushing its new Ready Mode technology, a new super-low-power mode for its 4th generation Core series CPUs. Through software and board-level optimisations, Intel’s partners should be able to create systems that can enter incredibly low-power states without shutting down or logging off, meaning they’re always ready for you without using boatloads of power.

Ready Mode will also sync with 3rd party applications to make certain tasks automated. Entering WiFi range could have pictures automatically downloaded from your phone and stored on the desktop hub.

All of this though, is going to be powered by the new generation of processors.

Intel’s desktop CPU line will remain largely unchanged in hierarchy; Pentium and Celeron serve the entry level, Core i3 and i5 models offer an extra bit of grunt for the mainstream, the performance segment relies upon i5 and i7 chips, and Core i7 (namely the HEDT variant) will sit atop the rankings.

Keen to point out that Extreme Edition will live on, Intel showed a slide which outlines the current flagship (Core i7 4960X) part’s 20x performance gain over its 2003 ancestor – the Pentium 4 EE.


Octa-core and DDR4 for the consumer market.

Keeping on the topic of the HEDT and Extreme Edition parts, 2014 will see commercial availability for Intel’s first eight-core desktop processors. Set to enter the scene in the second half of 2014 (many web sources suggest Computex in early June as a likely launch time frame) Haswell-E, as many enthusiasts currently know it, and the new X99 chipset will set the foundation for the first DDR4-supporting desktop platform.

Information surrounding the processors and the X99 chipset is still scarce, although a $1000 price tag for the flagship CPU seems a safe bet. Sources suggest that the flagship chip will feature 20MB of L3 cache, 40 PCIe 3.0 lanes, a quad channel DDR4 memory controller, and manufacturing using a 22nm process. The X99 chipset is likely to offer support for a larger number of SATA 6Gbps and USB 3.0 connections than X79. Other additions could include the M.2 storage interface.

Eight cores, a new HEDT chipset, and support for DDR4 memory. Although scepticism of Intel’s bold ‘reinvention’ claims is understandable, the aforementioned parameters lend credibility to the chipmaker’s suggestions.

If Intel’s plans to reinvent the desktop are to come to fruition, the company understands the importance of delivering in each segment of the market.

Representing a ‘tick’ in Intel’s ‘tick-tock’ cycle, Broadwell will serve the mainstream market later this year. In typical fashion, Broadwell will feature a die shrink to a 14nm manufacturing process – down from the 22nm used by Haswell.


Iris Pro Graphics and 14nm manufacturing  – ‘tick’.

One of Broadwell’s key features is the inclusion of Intel’s Iris Pro graphics in unlocked desktop processors. Iris Pro has been stamping its authority on the notebook scene, proving its strengths in machines such as the MacBook Pro. The graphics hardware has also found its way into Intel’s desktop CPUs, such as the 4770R used in Gigabyte’s BRIX Pro. Regardless of the arguments for and against on-chip graphics, Iris Pro looks set to provide a sizeable graphics performance upgrade over the current unlocked Haswell chips, which could be particularly useful in a SFF environment.

Launch dates for the 5th generation Intel core processors are still unclear. If previous Intel refreshes are anything to go by, the upcoming processors are likely to be launched alongside a new-and-improved chipset. Some companies have already started showing off their products based on Intel’s next generation mainstream chipset.

A focus on the desktop market wouldn’t be a fair claim if enthusiasts and overclockers were not included. Reaching out to the overclocking community, many of whom have been critical of Intel’s desktop CPU decisions in recent years, an update to the 4th generation Core processors is set for mid 2014.

Codenamed Devil’s Canyon, the updated processors will feature improved thermal interface material and updated packaging materials over their Haswell predecessors. Supported by the upcoming 9 series chipsets and geared towards overclocking, the Devil’s Canyon chips look set to address one of Haswell’s (and Ivy Bridge’s) biggest flaws – the poor contact between the silicon and the heatspreader.

While Devil’s Canyon isn’t particularly big news with Broadwell and Haswell-E on the horizon, it does serve as an indication that Intel could indeed be serious about its focus on the desktop market – a category that overclockers and enthusiasts fall into.

The final processor-related announcement from Intel comes from a Pentium part. Celebrating twenty years of the Pentium brand, Intel will be releasing a Quick Sync-supporting, multiplier unlocked Pentium chip in mid 2014. The Pentium Anniversary Edition processor will drop inside 8 and 9 series motherboards.

So what does Intel’s latest press release tell us? It serves as proof that the desktop market is still big business, even if it does exist in an ever-evolving form. It also outlines Intel’s interest in the desktop market and how the company plans to reinvent it.

KitGuru says: The proof is in the pudding, so to speak. But if Intel does indeed deliver on many of its outlined promises, the future for desktop may not be as gloomy as one would be forgiven for thinking.


Exclusive Interview with Richard Huddy from Intel at GDC


When Lisa Graff, Intel VP for the Desktop Client Platforms Group, recently told KitGuru about her plans for 2014/2015, the focus was clearly on reinventing the desktop. As KitGuru reported a while back, there will be a resurgence on desktop for Intel going forward, which we believe will peak around 2017. To get a different take on this information, we caught up with Intel’s Gaming Guru, Richard Huddy who was just eyeing up the hotel’s perfectly rendered breakfast fruit when we spoke.

“It’s all about delivering credibility with the game developers”, Richard told us. “With CPU and GPU performance increasing dramatically over recent years, there has also been pressure to reduce power consumption, to create a much better mobile experience”. This drive toward better graphics on the go seems to have spurred a lot of innovation at Intel.


Crytek to adopt AMD Mantle API for CryEngine


Advanced Micro Devices has announced that Crytek, a leading developer of video games and game engines, will adopt the company’s proprietary Mantle application programming interface for CryEngine. Once the latter gains support for Mantle, games based on CryEngine may get a performance boost on systems featuring AMD Radeon graphics based on GCN architecture.

CryEngine is a game engine that powers multiple titles across various platforms, including PCs, mobile devices and consoles. The engine utilises multiple application programming interfaces, depending on the platform. Going forward, Crytek plans to include support for the Mantle API in CryEngine. The inclusion of AMD’s Mantle API will allow CryEngine licensees to use all the capabilities of AMD Radeon graphics processing units based on the GCN architecture and achieve higher performance levels with all the eye candy turned on.

“By integrating AMD’s new Mantle API, CryEngine will gain a dimension of ‘lower level’ hardware access that enables extraordinary efficiency, performance and hardware control,” said Cevat Yerli, founder, CEO and president of Crytek.


AMD’s Mantle is a cross-platform API designed specifically for graphics processing units based on the graphics core next (GCN) architecture (e.g., AMD Radeon R9, R7 and HD 7000-series). The main purpose of the new API is to let game developers access hardware at a low level and achieve higher performance by avoiding the limitations of current APIs. According to AMD’s internal testing, Mantle can bypass the bottlenecks of modern PC/API architectures: it enables nine times more draw calls per second than DirectX and OpenGL thanks to lower CPU overhead.


Unfortunately, it is completely unclear when AMD and Crytek actually plan to implement Mantle API support into CryEngine. Moreover, it is uncertain whether Crytek and licensees of the engine will actually implement support of Mantle into existing video games to boost their performance. In fact, based on the planned list of CryEngine updates, which Crytek published this week, implementation of Mantle is not in the short-term plans of the developer.

“AMD is delighted to bring Mantle support to the enormous audience of gamers and game developers reached by Crytek’s CryEngine,” said Ritche Corpus, director of ISV gaming and alliances at AMD. “Together, AMD and Crytek are forging a path for the graphics industry that better utilizes gamers’ advanced AMD GPUs through ‘closer-to-the-metal’ API design.”

KitGuru Says: The announcement of Mantle support by a leading-edge game engine is, without any doubt, an important event for the whole industry. The questions are what the scope of the plan is and what the intended result is. If Crytek only wants to formally add some new effects to Crysis and/or release a DLC for the game with added performance/features while leaving full implementation for a future title (the approach it took with DirectX 9.0c in 2004: it added certain effects to Far Cry and then properly implemented the new tech in late 2006 with the release of Crysis), then it will barely have any effect on the market in the short to mid term, especially if the same approach is taken by the current licensees of the engine. If Crytek and its partners start to implement Mantle support into products that are in the pipeline and due to be released relatively soon, the situation will be a whole lot different, and we will see high-quality games taking advantage of AMD’s API in the mid-term future.

Intel remains committed to 450mm wafers, but there will be a delay


Early last year Intel Corp. demonstrated the world’s first fully-patterned 450mm wafers and in mid-2013 started to construct D1X module 2 fab, which is supposed to act as a primary development facility for virtually all of Intel’s 450mm initiatives. It was expected that commercial production of chips on 450mm wafers would start around the middle of the decade, but now the plans have changed and deployment of 450mm facilities was delayed.

ASML, one of the world’s leading suppliers of chip-making equipment, said back in late 2013 that it would slow down or stop investment in the development of tools that process 450mm wafers. Intel, TSMC and Samsung hold stakes in ASML and can potentially influence its decisions. Since none of the chip manufacturers opposed the decision (at least publicly), it looks like there are certain technological problems with the 450mm systems that will take years to solve.

Nonetheless, Intel has not given up hope of using 450mm wafers to produce its chips. While the firm will not be able to do so in 2015 – 2017, it now hopes to initiate 450mm manufacturing later, perhaps in 2018 – 2019.

“Our position for 450mm has not changed in that we expect deployment sometime in the later part of this decade,” said Intel spokesman Chuck Mulloy. “We adjusted our 450mm funding to ASML consistent with their plans. However, this is not a change in our (2014) capital forecast as it anticipated that change. We continue to work with our industry partners to align on timing.”


Intel’s first 450mm wafer. Image by X-bit labs.

Manufacturers of semiconductors can get 2.5 times more chips from a 450mm wafer than from a 300mm wafer, which reduces per-chip costs. However, 450mm equipment and 450mm fabs cost considerably more than conventional 300mm tools and factories.
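As a rough sanity check on that figure (our arithmetic, not Intel’s): a 450mm wafer has 2.25 times the area of a 300mm wafer, and because the unusable edge region shrinks relative to the total area, the number of whole dice per wafer improves a little further, towards the 2.5x quoted above.

$$ \frac{\pi \times (450/2)^2}{\pi \times (300/2)^2} = \left(\frac{450}{300}\right)^2 = 2.25 $$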

A lot of the research and development required to move from standard 300mm wafers to 450mm wafers is being done by the Global 450 Consortium (G450C), a group of companies established by the state of New York under governor Andrew Cuomo. The G450C includes Intel, IBM and GlobalFoundries, and is based in Albany at the SUNY College of Nanoscale Science and Engineering.

KitGuru Says: While the development of 450mm process technologies is facing delays, there is no doubt that at some point in the future the world will need so many chips that 450mm wafers will become a requirement. An improving economic situation could accelerate the move to 450mm wafers; the only question is when that will happen…

Microsoft unwraps DirectX 12 for PCs, tablets, smartphones and Xbox


Microsoft Corp. on Thursday unveiled DirectX 12, the latest incarnation of one of the world’s most popular application programming interfaces. The new API can not only greatly improve rendering performance, but also bring top-notch graphics quality to all Microsoft platforms, including personal computers, tablets/smartphones and even the Xbox One.

One of the key innovations of DirectX 12 is that it will allow game developers to access hardware resources at a “close-to-metal” level: video games will benefit from reduced overhead via features such as descriptor tables and concise pipeline state objects. In addition, DirectX 12 will allow games to significantly increase multithread scaling and CPU utilisation. Finally, the new API also presents a set of new rendering pipeline features that will improve the efficiency of algorithms such as order-independent transparency, collision detection, and geometry culling.

During the presentation of DirectX 12 the software giant demonstrated how the 3DMark benchmark recompiled for the new API can improve CPU scaling by up to 50%.

Another key aspect of DirectX 12 is that it now supports various mobile devices. The latter are hungry for additional graphics performance these days, but they also need longer battery life. By improving the efficiency of various computations (and essentially their performance), devices can get the job done faster and thus save battery life. Moreover, thanks to better use of multi-core CPUs and highly parallel GPUs, mobile devices can increase their overall performance, which means portable devices will be able to do the same things as modern notebooks (expect PC and console games to be playable on smartphones and tablets).

Initially, DirectX 12 will be supported by AMD Radeon graphics hardware powered by the GCN architecture; Intel Core i-series processors code-named Haswell and Broadwell with Iris graphics inside; Nvidia GeForce graphics processing units featuring Fermi, Kepler and Maxwell architectures; Qualcomm Snapdragon mobile system-on-chips with advanced Adreno graphics.

A DirectX 12 preview will be available this year, but the official release timeframe is unknown. According to Microsoft, 50 per cent of PCs will support DirectX 12 at launch. What is unclear is whether there will be certain DX12 functions that currently available hardware will not be able to support, and whether there will be “native” DirectX 12 graphics processors.


Microsoft also demonstrated how the Forza Motorsport 5 game, which is only available on Xbox One, can run at 60fps on a PC. Perhaps that was not a hard thing to accomplish, given that the rig it ran on featured an Nvidia GeForce GTX Titan Black, one of the highest-performing graphics cards today.

There are three key areas that should speed up rendering of 3D graphics in applications that rely on DirectX 12 API: pipeline state representation, work submission, and resource access.

Pipeline state objects. Direct3D 11 allows pipeline state manipulation through a large set of orthogonal objects, which provides a convenient, relatively high-level representation of the graphics pipeline; however, it does not map very well to modern hardware, since modern GPUs re-use their building blocks for different tasks. The Direct3D 11 API allows these states to be set separately, but the driver cannot resolve things until it knows the state is finalised, which is not until draw time. This delays hardware state setup, which means extra overhead and fewer maximum draw calls per frame. Direct3D 12 addresses this issue by unifying much of the pipeline state into immutable pipeline state objects (PSOs), which are finalised on creation. This allows hardware and drivers to immediately convert the PSO into whatever hardware-native instructions and state are required to execute GPU work (which PSO is in use can still be changed dynamically, and much more cheaply than in Direct3D 11). The result is significantly reduced draw call overhead, and many more draw calls per frame.
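To make the idea more concrete, here is a minimal C++ sketch of building one immutable pipeline state object up-front and then reusing it for drawing. It is illustrative only: the DirectX 12 SDK had not been released at the time of writing, so every structure and function name below (D3D12_GRAPHICS_PIPELINE_STATE_DESC, CreateGraphicsPipelineState and friends) should be read as an assumption about how such an API is typically exposed, rather than confirmed Microsoft API.

```cpp
#include <d3d12.h>

// Hypothetical sketch: bake the whole pipeline configuration into a single
// immutable object at load time, instead of setting it piecemeal at draw time.
ID3D12PipelineState* CreateOpaquePSO(ID3D12Device* device,
                                     ID3D12RootSignature* rootSig,
                                     D3D12_SHADER_BYTECODE vs,
                                     D3D12_SHADER_BYTECODE ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature        = rootSig;
    desc.VS                    = vs;                  // vertex shader bytecode
    desc.PS                    = ps;                  // pixel shader bytecode
    desc.SampleMask            = 0xFFFFFFFF;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets      = 1;
    desc.RTVFormats[0]         = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count      = 1;
    // Rasterizer, blend and depth state omitted for brevity; in practice every
    // field is filled in so the driver can compile the complete state once.

    ID3D12PipelineState* pso = nullptr;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso; // reused every frame; switching between PSOs is now cheap
}
```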

Command lists and bundles. In Direct3D 11, all work submission is done via the immediate context, which represents a single stream of commands that go to the GPU. Direct3D 12 introduces a new model for work submission based on command lists that contain the entirety of information needed to execute a particular workload on the GPU. Each new command list contains information such as which PSO to use, what texture and buffer resources are needed, and the arguments to all draw calls. Because each command list is self-contained and inherits no state, the driver can pre-compute all necessary GPU commands up-front and in a free-threaded manner. The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process. In addition to command lists, Direct3D 12 also introduces a second level of work pre-computation, bundles. Unlike command lists which are completely self-contained and typically constructed, submitted once, and discarded, bundles provide a form of state inheritance which permits reuse, which further improves efficiency and lowers the amount of data needed to be sent within a GPU.
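Sketched in the same hypothetical C++ terms, the submission model looks roughly like this: a self-contained list is recorded (potentially on a worker thread) and the only serial step is handing the closed list to the command queue. Again, the interface names are our assumption, not the released API.

```cpp
#include <d3d12.h>

// Hypothetical sketch: record a self-contained command list, then submit it.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue,
                     ID3D12PipelineState* pso, ID3D12RootSignature* rootSig)
{
    ID3D12CommandAllocator*    allocator = nullptr;
    ID3D12GraphicsCommandList* cmdList   = nullptr;

    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    // The list inherits no state: everything it needs is recorded into it,
    // so recording can happen on any thread without synchronisation.
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator, pso, IID_PPV_ARGS(&cmdList));

    cmdList->SetGraphicsRootSignature(rootSig);
    cmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    cmdList->DrawInstanced(3, 1, 0, 0);   // a single example draw
    cmdList->Close();                     // the list is now immutable

    // The only serial step: submission to the queue.
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
}
```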

Descriptor heaps and tables. Resource binding in Direct3D 11 is highly abstracted and convenient, but leaves many modern hardware capabilities underutilized. Direct3D 12 changes the binding model to match modern hardware and significantly improve performance. Instead of requiring standalone resource views and explicit mapping to slots, Direct3D 12 provides a descriptor heap into which games create their various resource views. This provides a mechanism for the GPU to directly write the hardware-native resource description (descriptor) to memory up-front. To declare which resources are to be used by the pipeline for a particular draw call, games specify one or more descriptor tables which represent sub-ranges of the full descriptor heap. As the descriptor heap has already been populated with the appropriate hardware-specific descriptor data, changing descriptor tables is an extremely low-cost operation. In addition to the improved performance offered by descriptor heaps and tables, Direct3D 12 also allows resources to be dynamically indexed in shaders, providing unprecedented flexibility and unlocking new rendering techniques. With dynamically indexable resources, a scene with a thousand materials can be finalized just as quickly as one with only ten.
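A final hypothetical C++ sketch of the binding model described above: descriptors are written into a shader-visible heap once, and a draw call then simply points at a sub-range of that heap. As before, the identifiers are our illustrative assumption rather than confirmed API.

```cpp
#include <d3d12.h>

// Hypothetical sketch: populate a descriptor heap up-front, then bind a
// descriptor table (a sub-range of the heap) with a single cheap call.
void BindThroughDescriptorTable(ID3D12Device* device,
                                ID3D12GraphicsCommandList* cmdList,
                                ID3D12Resource* texture)
{
    D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
    heapDesc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
    heapDesc.NumDescriptors = 1024;   // room for many resource views
    heapDesc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

    ID3D12DescriptorHeap* heap = nullptr;
    device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // Write a hardware-native view of the texture into slot 0 of the heap.
    device->CreateShaderResourceView(texture, nullptr,
                                     heap->GetCPUDescriptorHandleForHeapStart());

    // At draw time, binding is just pointing the pipeline at a heap range.
    cmdList->SetDescriptorHeaps(1, &heap);
    cmdList->SetGraphicsRootDescriptorTable(0,
        heap->GetGPUDescriptorHandleForHeapStart());
}
```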

KitGuru Says: What Microsoft says looks extremely good on paper. The industry needs a boost in processing efficiency, additional performance and a close-to-metal approach for those developers who want to make breakthrough content. The thing is that DirectX 12 will still have to support hardware from at least four designers, which means at least some operations may be implemented to fit them all, meaning lower efficiency compared to proprietary technologies like Mantle. If Microsoft manages to make DX12 nearly as efficient as Mantle, the latter will not be needed and will vanish into oblivion; but if AMD manages to further enhance the efficiency of Mantle, it will have a chance to become popular on the PC, which would prompt Nvidia to respond with its own proprietary API.

Transcend: SSD prices set to decline by 20 – 30 per cent this year


Adoption of solid-state drives has been growing rapidly in the recent years and will continue to do so this year, according to the chairman of Transcend. One of the main reasons behind growing popularity of SSDs is their declining prices, which will drop further in 2014.

Peter Shu, the chairman of Transcend, said at a news-conference in Taipei, Taiwan, that SSD prices were projected to drop another 20 – 30 per cent in 2014 after declining 30% in 2013 compared to the previous year. According to Mr. Shu, the demand for 256GB SSDs would surge if their average price falls to below $100, reports DigiTimes web-site.


Mini SSDs by Transcend

At present Transcend produces 100 thousand SSDs per month; half of the drives are intended for industrial usage and half are for consumer applications. The company hopes to boost sales of other NAND flash-based devices going forward, since a number of smaller players have left the market.

What is interesting to note is that Mr. Shu believes that prices of NAND flash and DRAM memory will continue to drop this year due to oversupply.

KitGuru Says: As competition in the SSD market intensifies, smaller players, manufacturers without huge capital backing and makers without their own NAND flash manufacturing are facing increased risks. Once SSDs become commodity products, the number of players will dramatically decrease.

Native DirectX 12 games to emerge in late 2015 – reports


While Microsoft Corp. revealed yesterday that it will release a preview version of its DirectX 12 application programming interface this year, its press materials did not disclose when to expect the final version of the API and games that take complete advantage of it. According to various media reports, it looks like the final software and applications built on it will only emerge in late 2015.

The new version of DirectX API will target not only traditional personal computers, but also mobile devices like smartphones and tablets as well as Xbox One. The DX12 will allow game developers to access hardware resources of graphics processing units on a “close-to-metal” level, which will result in higher efficiency and increased performance; in addition, the new API will allow games to significantly increase multithread scaling and CPU utilisation.

According to Shacknews and ZDNet, Microsoft plans to release the final version of DirectX 12 sometime in late 2015, more than a year from now. Games that take full advantage of the new API – whose engines will be developed with the “close-to-metal” approach in mind and will scale well on multi-core CPUs – will become available during the holiday season of 2015.

It is hard to tell which AAA franchises will actually adopt DX12, given the fact that many of them are made in two-year or even three-year cycles, but there will probably be at least several titles featuring the DirectX 12 in late 2015.


By late 2015 graphics and microprocessor hardware will be completely different than it is today. AMD will likely release a successor to its highly-successful GCN architecture as well as its Excavator x86 cores for central processing units. Nvidia will probably introduce its second-generation Maxwell architecture along with improved version of its Denver ARMv8 cores for mobile system-on-chips. Intel will also introduce all-new graphics and x86 cores. Qualcomm will release a new breed of Snapdragon chips with improved Adreno graphics and 64-bit general-purpose cores.

It is interesting to note that by late 2015 the software giant will also release an all-new version of the Windows operating system, which will, without any doubt, have an effect on the market. Given the numerous changes in DX12 compared to DX11 (support for PC, non-PC and console hardware, the close-to-metal approach), this may well point to a new direction for Windows in general. In fact, if we look at DirectX 12 through the “Windows in general” prism, we may well consider it the tip of an iceberg. Still, Microsoft did confirm that DX12 will work with Windows 8-based machines.

Keeping in mind that by late 2015 there will be all-new computing hardware, a new Windows operating system and new personal computers/mobile devices, do not expect AAA games from late 2015 to run smoothly on current-gen hardware in high resolutions with all the eye-candy on. Still, they should run pretty well, thanks to the fact that cross-platform titles will be designed with the Microsoft Xbox One and Sony PlayStation 4 in mind.

KitGuru Says: In the coming months Microsoft will release additional details about DirectX 12 and we will probably learn a little more about its innovations. It is clearly interesting to know whether DX12 supports various virtual reality gear, what’s the plan regarding Kinect 2 for PCs and what about technologies like AMD TrueAudio. The company is now naturally tight-lipped because information about the new API discloses its plans regarding Windows and Xbox One. But secrets do get disclosed or leaked.

New hardware will be needed to take full advantage of DirectX 12


Although Microsoft Corp.’s DirectX 12 application programming interface will significantly improve the performance of currently available hardware and will most likely enable a new level of graphics quality, new hardware will be needed to fully exploit the potential of the new API, according to a representative of Nvidia Corp.

Tony Tamasi, senior vice president of content and technology at Nvidia, told the TechReport that the DirectX 12 will have a lot of new functionality beyond what was discussed last week. To support the new graphics technologies, new graphics processing units will be needed. The set of new features may still be under discussion these days, so some new technologies could be added.

Microsoft DirectX 12 will only be officially released in late 2015, so by that time all leading designers of graphics processing units for PCs and mobile devices – AMD, Intel, Nvidia, Qualcomm, etc. – will have released new breeds of GPUs.


Up to now, Microsoft has revealed only two DX12 features that will need new hardware: new blend modes and the so-called conservative rasterization, which can help with object culling and hit detection.

So far Microsoft and its hardware partners have been concentrating on revealing what DirectX 12 can do for existing graphics processors as well as some general information. The new version of DirectX API will target not only traditional personal computers, but also mobile devices like smartphones and tablets as well as Xbox One. The DX12 will allow game developers to access hardware resources of graphics processing units on a “close-to-metal” level, which will result in higher efficiency and increased performance; in addition, the new API will allow games to significantly increase multithread scaling and CPU utilisation.

KitGuru Says: New software always tends to demand new hardware; it is not really a surprise. The question is why Microsoft decided to concentrate on the advantages DX12 can bring to current-generation hardware and not on the brand-new features. Is adoption of AMD’s proprietary Mantle so rapid that it creates risks for the software giant’s DirectX?


AMD’s socketed Athlon “Kabini” AM1 chips priced, pictured, unveiled


Early this month Advanced Micro Devices introduced its new low-cost desktop platform based on Kabini chip design. The new AM1 platform comprises microprocessors carrying Athlon and Sempron brands with integrated graphics cores as well as new mainboards with FS1b sockets previously only used for mobile applications. What AMD did not reveal in early-March were exact specs of new chips, their prices as well as some other minor details.

As the new Athlon processors are approaching the market, various web-sites, including CPU-World and Hermitage Akihabara, begin to reveal their prices as well as specifications. In the U.S., AMD Athlon 5150 and 5350 will cost $45 (£27.2) and $54 (£32.7), respectively. In Japan, the Athlon 5350 will be priced at ¥6700 (£39.6, $65.5), the model 5150 should cost ¥5680 (£33.6, $55.5), whereas Sempron 2650 and 3850 will be sold for ¥3700 (£21.9, $36.1) and ¥4200 (£24.85, $41), respectively.


The new Athlon and Sempron accelerated processing units in AM1 form-factor based on the Kabini design feature two or four AMD Jaguar x86 cores, an AMD Radeon R3 graphics engine with GCN architecture and 128 stream processors, a DDR3 memory controller and so on. Like other Kabini-based platforms, the AM1 supports two Serial ATA-6Gb/s ports, two USB 3.0 and eight USB 2.0 ports, PCI Express 2.0 x4, DisplayPort, HDMI and D-Sub.

At first glance, the $54 price of the AMD Athlon 5350 seems reasonable, since it formally allows sub-$200 personal computers to be assembled around the chip. However, keeping in mind that it is possible to get higher-performing AMD “Trinity” or “Richland” APUs for $60 – $65 in the U.S. today, socketed AM1 chips do not seem particularly valuable, at least in the “around $50” price segment.


Obviously, AM1 chips can get by with more affordable mainboards and more modest cooling than APUs in FM2 form-factor. But keeping in mind that AMD positions socketed Kabini as an inexpensive solution for those who want to upgrade later, the AM1 APUs should either be considerably cheaper than APUs in FM2 packaging, or at least offer similar upgrade capabilities. At present the AM1 platform offers neither decent upgradeability nor a truly low price.


AMD did not comment on the news-story.

KitGuru Says: AMD’s partners will start to officially sell the chips in AM1 form-factor in April. Therefore, only next month we will be able to actually draw our final conclusion regarding the ultra-low-cost socketed desktop platform AMD is about to start offering. The current combination of price, performance and upgradeability just does not seem right. But maybe next month mainboard makers will roll-out ultra-low-cost motherboards, small form-factor systems or even system boards for all-in-one PCs that will make the AM1 truly valuable for system integrators?

Nvidia GeForce GTX Titan Z: the first $3000 consumer graphics card


Nvidia Corp. on Tuesday introduced its new GeForce GTX Titan Z graphics card at its annual GPU Technology Conference. The graphics board carries two fully-featured GK110 graphics processing units on board and provides ultimate performance for video games in ultra-high-definition resolutions as well as general-purpose compute apps. Its price is rather ultimate too: $3000 per unit.

“If you’re in desperate need of a supercomputer that you need to fit under your desk, we have just the card for you,” said Jen-Hsun Huang, chief executive officer of Nvidia Corp.

Nvidia GeForce GTX Titan Z is powered by two Nvidia GK110 graphics processors in their maximum configuration with 2880 stream processors each, giving the solution 5760 CUDA cores in total and a whopping 8TFLOPS of single-precision compute performance. The board is equipped with 12GB of GDDR5 memory (6GB per GPU).
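Those numbers also let us estimate the clock speed (back-of-the-envelope arithmetic on our part, not an Nvidia figure). With two single-precision FLOPs per CUDA core per clock:

$$ f \approx \frac{8\times10^{12}\ \text{FLOPS}}{5760 \times 2} \approx 694\ \text{MHz} $$

That would put the GPUs well below the clock speeds of a single-GPU GK110 card, which is presumably how Nvidia keeps two such chips inside a sane power envelope.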


The new dual-chip graphics card looks similar to its predecessor, the GeForce GTX 690, at least from the cooling-system design standpoint. There are some other changes compared to previous-generation dual-GPU solutions: the graphics processors run at the same clock-rate at all times, which should eliminate performance bottlenecks and ensure proper operation of technologies like G-Sync.


The new dual-chip flagship graphics card from Nvidia will probably consume a rather large amount of power. However, before it starts doing so, it will consume a lot of your money first. Every unit will cost around $2999 (expect premium models to cost more) and will demand a decent CPU, power supply unit and mainboard.

Those who want to create the ultimate gaming rig can easily do so by slotting a second GeForce GTX Titan Z inside the PC and enjoying the power of quad-SLI.

KitGuru Says: Graphics cards for gamers have been getting more and more expensive for over ten years now. Those who want to really immerse themselves in video games at ultra-high resolutions like 4K (3840*2160) or 5K (5120*2160) need to pay rather insane amounts of money to truly enjoy such titles in excellent quality. Nonetheless, $3000 per graphics board seems too much. It is beyond what most PC gamers can afford, and therefore the GeForce GTX Titan Z will be sold in extremely limited quantities. What is an absolutely normal business practice in the world of fashion (where a lot of things are hand-made) does not seem to be too practical for the world of high-tech, where volumes are crucial for many reasons.

Nvidia updates GPU roadmap: reveals Pascal GPU architecture


Nvidia Corp. on Tuesday publicly updated its graphics processing units roadmap for the coming years. The company removed its code-named Volta architecture from the plan and introduced Pascal architecture instead. The latter is believed to feature similar innovations to Volta, but to be even more significantly tailored for low power consumption and will sport certain interconnection technologies not available today.

Named after the 17th century French mathematician Blaise Pascal, Nvidia’s next-generation family of GPUs will include three key new features: stacked DRAM, a key innovation that was promised for the Volta family of GPUs; unified memory technology, which is currently only available on AMD’s accelerated processing units (APUs) with heterogeneous system architecture (HSA) capabilities (e.g., Kaveri); and NVLink, a high-speed interconnect for graphics processing units and central processing units. As far as graphics capabilities are concerned, Pascal will support DirectX 12.


Nvidia roadmap. Image by LegitReviews

Stacked DRAM seems set to become a key feature of the next-generation GPUs due in 2016 and beyond from both AMD and Nvidia. Stacked DRAM allows memory bandwidth and capacity to be increased without extending the footprint or complexity of graphics cards. At present Nvidia expects a two to four times increase in memory bandwidth and capacity with Pascal, but in reality there could be a much more significant boost. Recently announced HMC 2.0 devices (hybrid memory cubes, stacked DRAM devices designed by Micron and allies) have a bandwidth of 480GB/s over 16-lane links. Four such cubes could provide just shy of 2TB/s of bandwidth.
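The arithmetic behind that figure is simple enough:

$$ 4 \times 480\ \text{GB/s} = 1920\ \text{GB/s} \approx 1.9\ \text{TB/s} $$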

The unified memory allows the CPU to access the GPU’s memory, and the GPU to access the CPU’s memory, so developers do not have to allocate resources between the two. Originally, this was part of the Maxwell architecture, but it looks like Nvidia removed the feature from that family of chips. Keeping in mind that unification of memory requires collaboration between GPU and CPU designers, Nvidia’s technology is either a result of work with Intel and AMD (or just Intel), or a standard for CPU-GPU unified memory is incoming.


Examples of NVLink implementation

NVLink is a pipe between GPUs and the CPU with a total bandwidth of 80GB/s. Given the bandwidth that hybrid memory devices provide, there is a clear need for such an interconnect a couple of years from now. NVLink is claimed to be twice as energy-efficient as standard PCI Express 3.0. The developer states that in an NVLink-enabled system, CPU-initiated transactions such as control and configuration are still directed over a PCIe connection, while any GPU-initiated transactions use NVLink. This preserves the PCIe programming model while presenting a huge upside in connection bandwidth. Moreover, when connected to a CPU that does not support NVLink, the interconnect can be wholly devoted to peer GPU-to-GPU connections, enabling previously unavailable opportunities for GPU clustering.


Another example of NVLink implementation

The Santa Clara, California-based GPU developer has been designing its high-speed interconnect for GPUs and its general-purpose ARM-based cores with supercomputers in mind for several years now. Perhaps, NVLink is the first fruit of this work. However, Nvidia will need to ensure that its NVLink is compatible with Intel, AMD, IBM and other microprocessors to make it an industrial solution.


To demonstrate what Pascal architecture is capable of, Nvidia showed off its Pascal module, a device akin to an MXM module that holds a high-performance GPU with on-package memory.

KitGuru Says: Like all things that are two years off, Pascal looks very promising and capable. However, it is so far away that we do not know what software it will run and what challenges it will face. For example, with the massive bandwidth of hybrid memory devices it should be an awesome solution for 4K gaming – but by the time it hits the market, 4K may well be almost mainstream.

HP denies plans to introduce breakthrough 3D printers in June


Although Hewlett-Packard announced last week that it planned to introduce its first 3D printers this June, the company has since retracted the statement. It appears that HP’s 3D printing tech is not yet ready for commercial introduction and will only be demonstrated by the end of October.

Chief executive of HP, Meg Whitman, announced at last week’s annual meeting with investors that the first 3D printers by HP would be released this June. The head of the company said that HP had solved a number of technical complications that had stalled broader adoption of 3D printing. Those technologies were projected to enable 3D printers with improved speed, accuracy, quality and so on.

However, this week the company quietly updated a post on one of its corporate blogs, informing interested parties of its inability to showcase its breakthrough 3D printing technology in mid-2014.

“During our Annual Meeting of Stockholders on March 19, HP answered a shareholder question about our 3D printing program and inadvertently stated that we would be making a technology announcement in June, when in fact we are planning to make that announcement by the end of our fiscal year,” the statement by HP, which was noticed by ComputerWorld, reads.

HP’s fiscal year ends on October 31, 2014.


Hewlett-Packard estimates that worldwide sales of 3D printers and related software and services will rise to almost $11 billion by 2021, from a mere $2.2 billion in 2012. At present the 3D printing market is dominated by smaller players, but going forward a lot is going to change.

KitGuru Says: HP chose an extremely strange way to communicate about the delay of its 3D printing announcement. The launch of a 3D printer should have a material impact on HP, hence, the postponement of the launch should have an impact as well… Anyway, it looks like the 3D printing revolution from HP will happen later, not sooner.

AMD and HP quietly debut FX-670K microprocessor


From time to time large PC makers unveil hardware that chip companies like Advanced Micro Devices would rather not discuss with the general public. Recently it turned out that some of the desktops sold by Hewlett-Packard carry a strange microprocessor called AMD FX-670K, which does not seem to be exactly an FX-series offering.

Hewlett-Packard recently started to offer HP Pavilion 500-266ea PCs with the AMD FX-670K accelerated processing unit, which has four code-named Piledriver x86 cores as well as a Radeon HD 8670D graphics engine (according to HP’s documents). The chip has a clock-rate of 3.70GHz (though it is unclear whether this is the maximum turbo frequency or the default clock-rate), 4MB of L2 cache and a dual-channel DDR3 memory controller. However, unlike other FX processors, this one does not have an 8MB third-level cache (a logical thing, since it is based on the Richland core), which affects performance in single-threaded applications. Since the chip’s model number includes the K suffix, the processor should have an unlocked multiplier to allow easy overclocking, which is a feature of all “unlocked” AMD FX chips.


It is interesting to note that a member of the Hardware Canucks forums got hold of the processor and found that while it uses FM2 packaging (just like all the other APUs based on the Trinity or Richland designs), it does not feature integrated graphics. If the graphics engine truly is not there (which may be a result of the chip not being fully supported by the BIOS of the mainboard it is installed in), then the FX-670K should be considered a product similar to the FX-4300-series, but in FM2 packaging, without L3 cache, and with a number of other differences.

Companies like AMD usually ship oddly named microprocessors and graphics cards to OEM partners due to requests of the latter. In fact, AMD widely offers Athlon X4 760K processor based on Richland core without graphics and with specs that are similar to the FX-670K chip.


It is no secret that the future of high-performance FX-series multi-core microprocessors is rather gloomy, as Advanced Micro Devices’ roadmaps simply do not include any updates to them. Many believe that going forward AMD will offer cherry-picked FX-series chips with integrated graphics, powered by the same cores as its mainstream accelerated processing units. If the FX-670K is a processor that actually carries an integrated graphics engine (one not recognised by some mainboards), it looks like the company has already started to do so. If, on the other hand, AMD has begun to re-brand Athlons as FXes for HP, then the value of the FX brand looks set to decline.

KitGuru Says: Since Athlon and Sempron will now be used to market chips based on low-cost/low-power micro-architectures, to avoid confusion AMD might quietly discontinue various Athlons and Semprons in FM2/FM2+ packaging based on high-performance architectures, but without integrated graphics engines. However, adding essentially cut-down chips into the premium FX line seems to be a rather cynical decision.

Intel and Altera to co-develop 14nm multi-die devices


Intel Corp. and Altera this week said they would extend their collaboration to the development of multi-die devices that leverage Intel’s world-class packaging and assembly capabilities. The multi-die devices will feature Altera’s Stratix 10 FPGAs and SoCs made using a 14nm process technology as well as other components. This will allow Altera to offer its solutions to a broader range of customers.

Altera’s work with Intel will enable the development of multi-die devices comprising Stratix 10 FPGAs [field programmable gate arrays] and SoCs [systems-on-chip] together with other innovative components, which may include DRAM, SRAM, ASICs, processors and analog components, in a single package. The integration will be enabled through the use of high-performance heterogeneous multi-die interconnect technology. Altera’s heterogeneous multi-die devices offer the benefits of traditional 2.5D and 3D approaches with more favorable economics. The solutions will address the performance, memory bandwidth and thermal challenges impacting high-end applications in the communications, high-performance computing, broadcast and military segments.

Altera Stratix 10 is powered by quad-core 64-bit ARM Cortex-A53 processor system, complementing the device’s floating-point digital signal processing (DSP) blocks and high-performance FPGA fabric. The Cortex-A53 is among the most energy-efficient of ARM’s processors, which also delivers such crucial capabilities as virtualization support, 256TB memory reach and error correction code (ECC) on L1 and L2 caches. Furthermore, the 64-bit Cortex-A53 core can run in 32-bit mode, which will run operating systems and code written for ARMv7 chips unmodified, allowing a smooth upgrade path from Altera’s 28nm and 20nm SoC FPGAs.


“Our partnership with Altera to manufacture next-generation FPGAs and SoCs using our 14nm tri-gate process is going exceptionally well,” said Sunit Rikhi, vice president and general manager of Intel Custom Foundry. “Our close collaboration enables us to work together in many areas related to semiconductor manufacturing and packaging. Together, both companies are building off one another’s expertise with the primary focus on building industry-disrupting products.”

If Intel and Altera are keeping to the original schedule, the latter should already have working samples of its Stratix 10 chips made using the 14nm process technology. While we do not know when the two companies plan to release the Stratix 10 and multi-die devices based on it, they appear to be satisfied enough with its performance and quality to commit to assembling multi-chip packages.

KitGuru Says: It looks like Intel’s foundry business is developing rather well. Perhaps there are not tens of customers yet, but at least Altera seems to be rather happy with Intel’s 14nm process, which has not yet been deployed for Intel’s own commercial production.

Intel 730 Jackson Ridge 240GB SSD RAID 0 Review


KitGuru reviewed the Intel 730 Jackson Ridge Solid State Drive back on March 6th, and there was a lot of interest in the review. Since then Intel sent us over two more 730 drives and today we supplement our original findings by adding some RAID 0 results.

As we covered in the earlier review the new 2.5 inch Solid State Drives use a specially qualified 3rd generation Intel controller, the same 20nm NAND flash memory that was used in the S3500, alongside an optimised firmware. Intel have overclocked the controller by 50% and the NAND bus has been tweaked by 20%.

Intel have released the 730 in 240GB and 480GB capacities and there is quite a difference in rated write performance between the two units.

Sustained Sequential Reads / Writes
240GB: up to 550 / 270 MB/s
480GB: up to 550 / 470 MB/s

Today we put two of the 240GB drives into a RAID 0 configuration and see how the performance scales. The 240GB is significantly slower than the 480GB version when looking at sequential write performance – down from 470 MB/s to 270 MB/s. Read speed is the same, rated at 550 MB/s.
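As a rough expectation (simple arithmetic on our part, not an Intel rating), an ideal two-drive RAID 0 array doubles the single-drive sequential figures, although in practice the SATA controller and the platform’s DMI link will eat into the read number:

$$ 2 \times 550\ \text{MB/s} = 1100\ \text{MB/s reads}, \qquad 2 \times 270\ \text{MB/s} = 540\ \text{MB/s writes} $$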

730 SSD 4K IOPS performance is rated at 86,000 read and 56,000 write for the 240GB model and 89,000 read and 74,000 write for the 480GB model. Again the 240GB unit suffers a noticeable performance penalty.

The new 730 Solid State Drive is based on the Intel PC29AS21CA0 controller which is found inside the Intel DC S3500 and S3700 products. Both of these drives are designed with the server market in mind, delivering consistent write performance. The higher cost S3700 has been designed specifically to deal with extremely taxing write based workloads. It is able to deliver 10 full drive writes every day, for five years.

The 730 is based on these drives, but as a consumer model it is using 20nm ONFI flash memory. The controller has been overclocked from the 400MHz speed in the server models to a final clock speed of 600MHz. NAND bus speeds have also been increased from 83MHz to 100MHz.
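Those figures line up with the percentages quoted earlier:

$$ \frac{600\ \text{MHz}}{400\ \text{MHz}} = 1.5\ (+50\%), \qquad \frac{100\ \text{MHz}}{83\ \text{MHz}} \approx 1.2\ (+20\%) $$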

The 730 is also protected with an impressive five year warranty – covering 70GB of data transfer each day across the time frame. If you are moving a lot of data around every day and demand the highest levels of reliability, then this is a very strong selling point.


AMD FirePro W9100: Hawaii GPU goes to work with 16GB of GDDR5


Advanced Micro Devices on Wednesday introduced its first professional graphics card based on the code-named “Hawaii” graphics processing unit. The new solution for CAD, CAM, DCC and other professional workloads boasts unbeatable double-precision compute performance in addition to a whopping 16GB of memory and six 4K display outputs.

As expected, the AMD FirePro W9100 professional graphics card is based on the code-named Hawaii XT graphics processing unit in its full configuration with 2816 stream processors (which also suggests 176 texture units, 64 raster operation units and a 512-bit memory bus, though we cannot confirm that). The graphics board is equipped with an incredible amount of GDDR5 memory with ECC capability – 16GB, a world first for a single-chip graphics solution. The novelty also features six mDP 1.2 ports that can support up to six 4K (3840*2160) displays.


AMD states that the FirePro W9100 professional solution features over 5TFLOPS of single-precision compute power and over 2TFLOPS of double-precision compute power (AMD says the Hawaii XT GPU is configured in such a way as to enable half the single-precision rate in double-precision computations). The exact clock-rates of the FirePro W9100 are unknown, as are the exact peak performance rates.
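Working backwards from the quoted throughput (our estimate, not an AMD specification), a GCN part executes two single-precision FLOPs per stream processor per clock, so “over 5TFLOPS” implies a core clock of at least roughly:

$$ f \gtrsim \frac{5\times10^{12}\ \text{FLOPS}}{2816 \times 2} \approx 888\ \text{MHz} $$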


Modern professional workflows include not only design itself, but also simulation as well as visualization, which is why AMD and Nvidia not only offer high performance in professional graphics applications, but also in simulation and visualization programs. As a result, GPGPU performance gains importance in general.


According to AMD, the FirePro W9100 professional graphics solution significantly outperforms rivals like the Quadro K5000, Quadro K6000 as well as the Tesla K20X in the Luxmark 2.0 OpenCL benchmark in both single-GPU and multi-GPU configurations. The W9100 is also well ahead of the K6000 in the SiSoft Sandra FP64 benchmark.


Based on data from AMD, the FirePro W9100 is also ahead of its rivals in SolidWorks 2014.

AMD FirePro W9100 will be available in April. Pricing will be announced at that time.

KitGuru Says: It is interesting to note that AMD decided not to introduce other new professional graphics solutions based on the Hawaii graphics processor at this time. The company clearly needs a “less high-end” FirePro W8100 featuring Hawaii, as well as Radeon Sky series cards for cloud computing. It does not look like AMD plans to refresh its FireStream family of GPGPU accelerators for technical computing, as it has not done so for four years now.

WD debuts My Passport Pro dual-drive solution with Thunderbolt


Western Digital Corp. has announced its first portable storage solution featuring Thunderbolt interconnection. The My Passport Pro dual-drive storage device has user-selectable RAID functionality to deliver needed performance or reliability for the most demanding applications in the field, without the need for power adapters or extra cables.

WD MyPassport Pro is available in 2TB or 4TB capacities and incorporates two 2.5” hard disk drives. Unlike some competing solutions, the MyPassport Pro lets users choose data striping (RAID 0) for high performance or mirroring (RAID 1) for data redundancy, which allows it to address different needs of different users. The external storage is powered using the Thunderbolt cable.


According to WD, the maximum read/write performance of the My Passport Pro is 233MB/s, which is a lot for a hard-drive-based device (though below what even low-cost solid-state drives can offer). However, it is well below the speed the Thunderbolt interconnect can provide (up to 800MB/s). Therefore, it is not completely clear why WD decided to use the Thunderbolt interface instead of the more widely available USB 3.0.
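For context (our rough figures, not WD’s), the drive sits comfortably within what the more common interface could already deliver:

$$ 233\ \text{MB/s (drive)} < \sim\!400\ \text{MB/s (typical real-world USB 3.0)} < \sim\!800\ \text{MB/s (Thunderbolt)} $$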

My Passport Pro will be available immediately at Apple and major consumer electronics retailers and e-tailers as well as online. Pricing for the 2TB My Passport Pro is $299.99 (£179) and the 4TB model is $429.99 (£259).


KitGuru Says: The WD My Passport Pro external storage solution seems to be a decent portable device, which could ensure either high performance or maximum reliability. Its main drawback is, ironically, its main feature: the Thunderbolt interconnection. The latter is a fast, easy to use and reliable technology, but it is not yet widely available. As a result, the WD My Passport Pro is incompatible with loads of PCs that feature USB 3.0, but do not sport Thunderbolt. Moreover, with 233MB/s maximum bandwidth, the Thunderbolt is simply not needed for the WD My Passport Pro.

Gigabyte GTX780 Ti Windforce OC Review (1080p, 1600p, 4K)


Nvidia’s GTX780 Ti has been a big seller in 2014, leading the high-end performance market. We have reviewed many GTX780 Ti cards since the official Nvidia release last November, but due to reader demand today we look at one of the fastest overclocked models, from Gigabyte. Their Windforce OC model is clocked at a whopping 1,020MHz out of the box. How does it handle at Ultra HD 4K resolutions?

Gigabyte have fitted a special version of their triple fan Windforce cooler to the GTX780 Ti. Gigabyte name this particular cooler the WindForce 3x as it incorporates a unique ‘Triangle Cool’ methodology. The company are using three quiet PWM controlled fans, a huge ram heatsink, two 8mm heatpipes and four 6mm heatpipes.

                     Ref Nvidia GTX780Ti    Ref Nvidia GTX780    Ref Nvidia GTX Titan
GPU                  GK110                  GK110                GK110
Technology           28nm                   28nm                 28nm
Transistors          7.1Bn                  7.1Bn                7.1Bn
ROPs                 48                     48                   48
TMUs                 240                    192                  224
CUDA Cores           2880                   2304                 2688
Pixel Fillrate       42.0 GPixel/s          41.4 GPixel/s        40.2 GPixel/s
Memory Size          3GB                    3GB                  6GB
Texture Fillrate     210.2 GTexel/s         165.7 GTexel/s       187.5 GTexel/s
Bus Width            384-bit                384-bit              384-bit
Bandwidth            336 GB/s               288.4 GB/s           288.4 GB/s
GPU clock speed      876MHz                 863MHz               837MHz
Boost clock speed    928MHz                 902MHz               876MHz
Memory clock speed   1,750MHz               1,502MHz             1,502MHz

The Gigabyte GTX780 Ti Windforce OC is clocked much higher than the reference GTX780 Ti. The GK110 core speed has been increased from 876MHz to 1,020MHz. This is the highest clocked GTX780 Ti that has entered our labs. The GDDR5 memory is running at 1,750MHz (7Gbps effective).
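That works out to a factory overclock of roughly 16 per cent on the core:

$$ \frac{1020\ \text{MHz}}{876\ \text{MHz}} \approx 1.16 $$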

Today we test hardware with a 30 inch Apple Cinema HD display (2,560×1,600) and with the ASUS PQ321QE Ultra HD 4K Monitor (3,840×2,160).
The 4K ASUS PQ321QE panel retailed last year at a whopping £2999.99 asking price, but as we predicted this has dropped in 2014 to £2,279.99 inc vat. We expect further price cuts in the coming months.
Today we test using the latest 335.23 Forceware drivers.

Micron joins OpenPower, set to co-develop next-gen datacenters


Micron Technology, one of the world’s leading makers of dynamic random access memory (DRAM), NAND flash memory and various storage solutions for enterprise datacenters, on Friday said it had joined the OpenPower foundation, an open development community based on the Power microprocessor architecture, as a platinum member.

The OpenPower foundation, which was established back in August 2013, aims to develop advanced server, networking, storage and acceleration technology aimed at bringing more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers. The organization is developing its innovations around IBM Power microprocessors and the Power architecture.

Micron joins a growing roster of technology organizations working collaboratively to build advanced server, networking, storage and acceleration technology. Being a leading developer of solid-state storage solutions for the enterprise and datacenter, Micron is interested in ensuring that its future products are compatible with new machines developed by the OpenPower.

In addition, Micron recently announced its own Automata processing technology, a programmable silicon device, capable of performing high-speed, comprehensive search and analysis of complex, unstructured data streams. Micron is clearly interested in driving Automata into next-generation datacenters as well as supercomputers.


“Participating in the OpenPower Foundation provides a great opportunity for Micron to help drive a new and exciting collaborative development model,” said Robert Feurle, Micron’s vice president of marketing – compute & networking. “This technology platform will broaden innovation and create greater choice for our customers.”

The OpenPower Foundation is highlighting four key areas for development: system software, application software, open server development platform and hardware architecture. Current focus is on software development and preliminary hardware design.

“The goal of the OpenPower Foundation is to open up the Power architecture in a way that fosters collaboration and accelerates new innovations in computing,” said Doug Balog, general manager of IBM Power systems. “Micron’s deep experience in memory and storage innovations will help the foundation reach this goal.”

KitGuru Says: IBM managed to make Power truly successful in the mission-critical and high-performance computing markets in the past, but at present Power’s market share is on the decline. The x86 architecture is slowly but surely gaining high-end server market share, and there does not seem to be any way to stop its expansion other than by offering a competing eco-system. The OpenPower foundation aims to do just that, and it looks like Micron believes the organization has enough power to create a new eco-system.

AMD to launch dual-chip Radeon R9 295 X2 in coming weeks – report


Earlier this week Nvidia Corp. rather unexpectedly unveiled its GeForce GTX Titan Z dual-chip graphics card that carries two GK110 graphics processing units. At present, the novelty, which is yet to emerge on the market, is formally the world’s highest-performing graphics solution for consumers. Nonetheless, there is a rival coming the GTX Titan Z way and it is approaching fast.

Advanced Micro Devices plans to roll-out its next-generation dual-chip Radeon R9 295 X2 graphics card with two code-named Hawaii GPUs onboard as early as in the coming weeks, in the first half of April, according to a report from OverClockers.ru. The company intends to start sales of the product a little after the formal announcement, but do not expect the gap between the introduction and actual availability to be a long one.


At the CeBIT 2014 trade-show AMD demonstrated its reference design Radeon R9 295 X2 graphics card with two Hawaii graphics chips, according to a media report. The graphics board clocked its GPUs at 1GHz and used a liquid cooling system akin to that of the Asus Ares II dual-GPU solution. It is unknown whether the commercial Radeon R9 295 X2 will feature a similar cooling system, or will stick to a more traditional solution.


At present, nothing particular is known about configuration and performance of the Radeon R9 295X2. However, some sources close to AMD have suggested that the new flagship would offer better price/performance ratio than Nvidia GeForce GTX Titan Z. Keeping in mind that the latter costs $3000, achieving better price/performance ratio should not be that hard. Moreover, given that it is unlikely that the Titan Z operates at high clock-speeds, the Radeon R9 295X2 may end up being a faster solution…

AMD did not comment on the news-story.

KitGuru Says: Unlike in the case of the GeForce GTX 690 back in 2012 (when AMD did not offer a rival till mid-2013), this time AMD is not only going to show up at the fight against Nvidia’s dual-chip flagship, but it looks like the company plans to win the battle. We’ll see the result in a couple of weeks.
