Apple Supplier Foxconn Will Build $10B Factory in the US


Foxconn, an Apple supplier and the world's largest electronics contract manufacturer, has announced that it will build a new factory in Wisconsin, USA, to manufacture LCD screens.

The news was announced at an event attended by Foxconn chairman Terry Gou and US President Donald Trump, who has aggressively encouraged local and foreign companies to invest in the US.

According to a report from The Verge, Foxconn plans to spend USD 10 billion over three years to build a 20-million-square-foot campus, which will employ at least 3,000 people when it first opens.

The Trump administration said the Foxconn investment could create jobs for up to 13,000 people. Going forward, Foxconn could build new plants elsewhere in the US, such as in Illinois, Indiana, and Michigan.

The LCD screens manufactured at Foxconn's new factory are made for the TV maker Sharp. Foxconn's parent company, Hon Hai, holds a stake in Sharp, which it bought for $3.5 billion last year. Given that the US is Sharp's largest market, it is not surprising that Foxconn is building a factory in the country; doing so reduces freight costs.

The announcement from Foxconn comes a day after Trump boasted, in an interview with The Wall Street Journal, that Apple CEO Tim Cook had pledged to build “three big factories” in the US.

However, it remains unclear whether Apple itself will build a factory in the US, or whether an Apple supplier will. Apple declined to comment when asked about the conversation between Cook and Trump.

New Windows Preview Can Connect Android to PC


In mid-May, Microsoft explained its new strategy in the mobile industry. After failing to attract consumers with Windows Phone, Microsoft wants to win over iPhone and Android users with the applications it makes.

Now, Microsoft is letting Windows 10 beta testers try new features in the latest Windows 10 preview, build 16251. Android users will be able to try the “Microsoft Apps” app, which connects the phone to the PC. According to a report from The Verge, the feature can currently be used only by Android users; it remains unknown when Microsoft will launch a similar feature for the iPhone.

This application will allow users to take advantage of various cross-device features, including the ability to share links from a mobile device directly to Windows 10. This is the first feature Microsoft has created to connect Android and iOS to Windows 10.

Going forward, the Redmond company also wants to create a feature that lets users move data from their phone directly to a PC, including a universal clipboard for copying content between devices.

In addition to the features connecting Windows 10 to mobile operating systems, Microsoft also provides a new Cortana feature: you can now see Cortana search results in the menu, without having to open a web browser.

Apple Discontinues the iPod Nano and Shuffle


Apple has decided to focus on selling a single iPod product, the iPod Touch, and has stopped selling the iPod Nano and Shuffle on its official website.

According to Phone Arena, the information has been confirmed by Apple, and the Cupertino-based company says it has already made changes to its official website to reflect the decision. However, remaining iPod Nano and Shuffle stock may still be on sale elsewhere for a while.

Meanwhile, for the iPod Touch, Apple rolled out its first update since 2015. The device now comes in a 32GB version for USD 199 (Rp2.7 million) and a 128GB version for USD 299 (Rp4 million). Previously, 16GB and 64GB versions were offered at those price points.

The Apple iPod Touch comes with a 4-inch screen at a resolution of 640 x 1136 pixels. The device is powered by a dual-core Apple A8 chipset with an M8 motion co-processor and 1GB of RAM. On the back there is an 8MP camera with an f/2.4 aperture, while the front has a 1.2MP camera with an f/2.2 aperture. Apple equips the iPod Touch with a 1,043mAh battery.

The original iPod was Apple's MP3 music player and the first product to make Apple the consumer-products star it is today. Not long after, Apple announced further products in the iPod line, including the iPod Nano and iPod Shuffle. The iPod Touch was first introduced on September 5, 2007, some time after Apple launched the iPhone.

CPU socket TR4: AMD Threadripper requires new CPU coolers

Noctua cooler for sockets SP3 and TR4

In a few days, AMD will release the new Threadripper CPUs. So far, little information about compatible CPU coolers has surfaced. The images of X399 mainboards already available show, however, that in most cases no mounting kit will suffice for TR4; new CPU coolers are needed.

Threadripper fuses two processors

In contrast to competitor Intel, AMD does not rely on larger dies that unite more cores on a single chip for the high-end desktop (HEDT) segment, but instead joins two Zeppelin dies via its so-called Infinity Fabric. These dies, in turn, form the basis of the current Ryzen processors for the AM4 socket. Threadripper for private users is half an Epyc server processor, which combines four of these single dies as a multi-chip module. The result is a significantly larger processor, as pictures of the upcoming X399 motherboards show.

The corresponding TR4 socket has been greatly enlarged and modified compared to AM4: instead of the pin grid array (PGA) of the mainstream Ryzen CPUs, a land grid array (LGA) is used, as known from Intel, so the CPU itself carries no pins. A comparison with Intel's socket 2066 motherboards also reveals that AMD follows a well-known principle for cooler mounting: a stabilizing backplate connects the socket to the motherboard.

Drill holes and CPU size

While the drill holes for mounting CPU coolers are arranged in a rectangle on AM4, TR4 uses a trapezoidal arrangement of the cooler mounting points. The first mainboard images already make clear that current CPU coolers cannot be compatible with Threadripper without additional equipment. In addition, AMD's multi-chip module results in an immensely large processor whose heat spreader is not completely covered by many current CPU coolers.

Noctua has already shown a prototype for TR4 at Computex, based on a well-known model from the manufacturer but equipped with a significantly larger base plate for Threadripper. Upon request, the manufacturer stated that upgrade kits for existing coolers are deliberately not planned.

“We will not offer an upgrade kit for existing models in this case, as the cooling capacity achievable with insufficient coverage of the CPU is simply not good enough to meet the demands that our customers, and of course we ourselves, place on a Noctua cooler.” – Noctua

There is also an exception

Despite the mounting differences from AM4, Arctic recently announced by press release that the water coolers of the Liquid Freezer series will be compatible with the new TR4 socket without additional mounting equipment. No further information is available, but Arctic indirectly confirms the rumor mill's guess that a mounting frame connecting the AiO cooler to the socket must be supplied with either the mainboards or the processors. Since Arctic sources the Liquid Freezer coolers from the OEM Asetek, it can also be assumed that compact water coolers based on the same design can be used with Threadripper.

The current Asetek compact water coolers use a relatively large CPU contact plate that even large socket 2011 processors do not fully cover. This makes use with the even larger Threadripper CPUs possible. Air coolers generally have a smaller base plate, which is why new developments, such as the one announced by Noctua, are required. Given the presumed high TDP of 180 watts for a Threadripper processor, mainly larger tower coolers are to be expected.


The product range of the Arctic Liquid Freezer reveals what has not yet been communicated: the support area under each cooler's packaging contents lists a TR4 kit, consisting of a new mounting frame plus screws and nuts, that is supplied together with the CPU. AMD therefore ensures the usability of the Asetek compact water coolers with its upcoming HEDT processors itself, rather than leaving this to the mainboard manufacturers.

AMD Showcases Ryzen Threadripper Box, Larger than Mini-ITX PC

AMD will release the Ryzen Threadripper to rival Intel's high-end desktop Skylake-X and Kaby Lake-X processors

AMD will soon release its highest-end processor, the Ryzen Threadripper. Recently, the company showed off the processor's retail packaging.

Through its official Facebook account, AMD showed off a box bearing the Ryzen Threadripper logo. Most striking is the size of the box: it is even bigger than some Mini-ITX PCs.

The question is what is in the box, and whether AMD will include a liquid cooling system (watercooling) in the sales package. Unfortunately, that information is still unknown.

AMD CEO Lisa Su holds the Ryzen Threadripper sales box (photo: special)

Ryzen Threadripper is a new AMD processor that will rival Intel's HEDT (High-End Desktop) Skylake-X and Kaby Lake-X processors. It will come in configurations of up to 16 cores and 32 threads and will be paired with AMD's latest X399 chipset.

On top of that, as an enthusiast-grade processor, Ryzen Threadripper supports quad-channel DDR4 memory up to 2TB and offers 64 PCIe lanes. That means you can install four graphics cards in a 16+16+16+16 lane configuration.

Ryzen Threadripper will be shown at SIGGRAPH 2017 during a special event called Capsaicin, in Los Angeles, United States. At the event, AMD will also introduce the Radeon RX Vega graphics card for the first time.

Amazon Launches Spark Shopping Network Application

In an effort to promote more product reviews and grow its online user base, Amazon has paved the way for online shopping to integrate with social media.

Amazon has launched a social feature called Spark, marking the retail giant's first foray into the world of social media.

Spark allows members to showcase and purchase products on the platform. However, Spark is currently available only to paying Amazon Prime members. Members can share photos and videos, much as on Instagram and Pinterest.

The new feature launched on Tuesday (7/17) on mobile devices running Apple's iOS operating system.

Spark users can tag products that are available on Amazon, and anyone browsing can instantly find and buy them on the platform. An Amazon spokesman said Spark was made to let customers discover and shop, tell stories, and gather around the things they like.

When customers first visit Spark, they choose at least five interests they want to follow. Amazon uses these as a reference to build a feed of relevant content.

“The customer saves the desired product by tapping the image with a shopping bag icon,” he said.

Many Amazon users on social media describe the service as a cross between Instagram and Pinterest with a touch of e-commerce. To promote Spark, Amazon enlisted influencers and bloggers to post on it.

Amazon shares closed up 0.2 percent at 1,026.87 dollars on Wednesday.

The risks of server virtualization

Server virtualization can bring many benefits to companies. But there are cost and operational risks that IT managers should know about. An overview.

Server virtualization is considered a universal weapon against sprawling IT costs and complex server structures. However, the shadow side of the technology and its use is often overlooked. Virtualized infrastructures bring their own difficulties – from security problems to high requirements for high availability and disaster recovery, to thorny licensing issues. The potential benefits, such as cost savings and flexible IT deployment, can only be achieved by those who know the risks and avoid the typical traps.

Case 1: Old wine in new bottles

It starts with the fact that many of the rules governing physical machines and servers, each of which was responsible for a single application, no longer apply in virtualized environments. Many of the tools, organizational structures, and pre-virtualization habits of thought no longer fit modern environments. The technology alone – that is, the hypervisor – cannot revolutionize the data center, let alone cut costs, by itself. Put briefly: virtual is not physical.

A hypervisor does not make a modern IT environment
While decoupling servers from the hardware offers considerable simplification and more flexibility, the requirements for intelligent management and automation of the new environment increase:

VM sprawl: The ease with which servers are moved to virtual machines (VMs) often results in more administrative work, because suddenly there are more (virtual) servers than before, and the sheer mass of systems has to be managed. Related problems: a missing overview – in some cases, virtual machines can be identified only by file name or a few pieces of metadata – excessive consumption of storage space, and update and maintenance needs. Organizational measures and policies should take effect from the outset to prevent this. In addition, organizations should evaluate appropriate tools for managing virtual environments to provide technical support.

New tools and processes: New options such as flexible provisioning, image retention, or VM migration from one machine to another require appropriate tools and new policies to ensure control over the entire system. Without standardization of processes and environments, nothing works.

Failure risk: Dependency on a few servers makes for vulnerability. Virtualized environments combine many servers onto little hardware, which implies that many servers fail when one computer fails. Therefore, high demands are placed on the equipment and mechanisms for high availability. As a basis, IT organizations need shared storage in the form of a SAN (storage area network), usually based on Fibre Channel or iSCSI. The storage network must also be designed for high availability.

Keep storage in mind, virtualize storage

Server virtualization cannot succeed without a sophisticated storage system. Local hard disk space is not enough for demanding systems that must meet requirements for high availability, dynamic load distribution, and extensive automation. Shared storage is an indispensable prerequisite. The requirements for such storage are reliability and failure safety, but also flexibility and cost control. Especially in virtualized environments, storage requirements can explode.

Therefore, concepts for the efficient use of storage, as well as the transparent integration of different systems from different manufacturers into a complete system, are required. These requirements can only be realized with storage virtualization, which systematically abstracts from the underlying hardware through an additional software layer. Accessing systems and applications are thus separated from the hardware. This has many important effects: setting up logical storage areas (LUNs), extending disks, or migrating data requires no physical intervention, and many management processes run without interruption. As with server virtualization, storage virtualization improves availability.

Vulnerable security: While overall operational security is ensured by implementing HA mechanisms, and can even be increased compared to purely physical servers, the additional hypervisor software layer potentially offers new surfaces for attacks on system and data security. What used to be sheet metal is now a pure data object: the former server is reduced to a file that can be copied, moved, and deleted, which opens up new security problems. In addition to purely technical security gaps, management problems often lead to further security holes. Security and compliance specialists are often not brought on board for the design and implementation of virtualization setups. This continues into network management, where different teams are responsible for setting up and managing the new virtual networks alongside the existing physical network structures, and conflicts inevitably come into play. Administrators of the VMs are not necessarily the administrators of the server infrastructure; in introduction projects, however, these roles are often mixed casually, so that parts of the security architecture implemented to date are undermined.

Case 2: False and exaggerated expectations

The virtualization euphoria tempts companies to approach both the capabilities of the technology and its potential benefits with mistaken and partly exaggerated expectations. When planning and implementing virtualized servers, the consolidation capabilities of the new infrastructure are often overestimated, something partly promoted by optimistic manufacturer statements. In test setups and under laboratory conditions, up to 50 different VMs can run on one computer sufficiently quickly and reliably. In realistic production scenarios, significantly lower consolidation rates are achieved: a limit of 6 to 8 VMs per computer is quite common for resource-intensive, business-critical applications. If higher levels of consolidation are assumed up front, there is a risk of cost explosion, since more computers must be deployed, which in turn require more storage, network capacity, licenses, and administration.

Particularly at the beginning of virtualization projects, fewer and less critical servers are consolidated onto virtual systems, which usually works well. Problems arise when, in later project phases, these ratios are transferred to heavily loaded, mission-critical systems, possibly with high demands on network bandwidth. This inevitably leads to glaring bottlenecks.

As a rule of thumb, virtualized computers should be designed so that at most 60 percent of the physical resources are loaded. This still achieves high consolidation rates while keeping sufficient reserves. The correspondingly higher costs must be planned for.
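This rule of thumb can be sketched as a simple capacity calculation. The host size and VM demands below are illustrative example values, not figures from the article:

```python
# Illustrative capacity-planning sketch for the 60-percent rule of thumb.
# HOST_CORES, HOST_RAM_GB, and the example VM demands are hypothetical.

import math

HOST_CORES = 16        # physical cores per host (example value)
HOST_RAM_GB = 128      # physical RAM per host (example value)
TARGET_LOAD = 0.60     # load at most 60% of the physical resources

def hosts_needed(vm_demands):
    """vm_demands: list of (cores, ram_gb) tuples, one per VM."""
    total_cores = sum(c for c, _ in vm_demands)
    total_ram = sum(r for _, r in vm_demands)
    by_cpu = math.ceil(total_cores / (HOST_CORES * TARGET_LOAD))
    by_ram = math.ceil(total_ram / (HOST_RAM_GB * TARGET_LOAD))
    return max(by_cpu, by_ram)   # the scarcer resource decides

# 20 mid-sized VMs, each needing 2 cores and 8 GB of RAM:
print(hosts_needed([(2, 8)] * 20))   # -> 5 hosts
```

Note how the 60-percent headroom raises the host count compared with packing machines to 100 percent; that difference is exactly the extra cost the article says must be planned for.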

Project killers: server and network performance

Due to the additionally introduced software layer, virtualization costs some computing performance and network throughput. In well-designed infrastructures, in which the technical components are optimally coordinated, this is hardly noticeable, if at all. However, an outlier in just one of the many components can adversely affect the performance of the entire environment, making it unusable under production conditions. Problems arise primarily with the network connection and with storage I/O. Above all, the storage system must be performance-optimized for the virtualized environment. The access pattern of the virtualized environment to storage is a central factor: it is usually random I/O, for which the transmission bandwidth is not critical, but rather the number of possible input/output operations per second (IOPS).

In addition, not all workloads and servers are suitable for virtualization. Project managers should select virtualization candidates according to their technical characteristics, especially performance. If a physical server already runs at more than 50 percent CPU utilization and requires more than 6 or 8 GB of RAM, it is unlikely to be a good candidate, because the virtual environment can never be faster than the underlying physical one, and the consolidation goal cannot be met. The same is true for high I/O utilization.
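The selection criteria above can be sketched as a simple screening function. The thresholds come from the text; the function itself and its parameter names are illustrative:

```python
# Illustrative screen for virtualization candidates, using the rough
# thresholds from the text: a physical server already above 50% CPU
# utilization, needing more than 8 GB of RAM, or with heavy I/O is a
# poor candidate for consolidation.

def is_virtualization_candidate(cpu_utilization, ram_gb, io_heavy=False):
    """cpu_utilization as a fraction (0.0-1.0), ram_gb in gigabytes."""
    if cpu_utilization > 0.50:
        return False   # the VM can never be faster than the physical host
    if ram_gb > 8:
        return False   # RAM demand too high to consolidate sensibly
    if io_heavy:
        return False   # high I/O utilization rules a server out as well
    return True

print(is_virtualization_candidate(0.30, 4))   # True  - lightly loaded
print(is_virtualization_candidate(0.70, 4))   # False - CPU-bound already
```

In practice such a screen would be fed from monitoring data gathered over weeks, not a single snapshot; the sketch only captures the decision logic.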

Virtualization: does the manufacturer help or not?

Virtualization beginners in particular can be surprised that many software vendors still handle support for their solutions in virtual environments very restrictively. Some players, such as SAP, IBM, or Oracle, officially support running their products on the market-leading hypervisors from Microsoft, VMware, and Citrix (Xen), while other virtualization providers are left out. Some vendors require that, in a support case, an “unknown” problem first be reproduced on a physical server; support is then provided only if the problem has no connection with the hypervisor. Oracle is particularly piquant here: the database maker exempts only its own hypervisor, “Oracle VM”, from this constraint.

The support question has further variants: with some systems, including some common Linux distributions, technical support is available but entails additional costs. You can create additional instances of your virtual server free of charge, but then pay the manufacturer's support fee for each additional server – be it physical or virtual.

Case 3: Lack of technical know-how

At the beginning of virtualization projects, the complexity of administration and of the associated technical environment is often underestimated. Where detailed knowledge falls short, problems and incalculable risks are pre-programmed. The technology holds plenty of pitfalls for those who underestimate the details. For example, virtualization lives from the ability to move VMs back and forth between computers. The prerequisites for this, however, are numerous: the participating computers must run the same hypervisor, be linked in a pool, and be connected to networked storage. In addition, the CPUs of the participating computers must be “identical”, i.e. at least belong to the same family. Here, the subsequent expansion of a pool with additional computers can become a problem: if the processors differ even slightly within the same computer model, the hypervisor may refuse to migrate the VM.

The trouble with backup

As a study by Kroll Ontrack shows, data is often lost in virtualized environments. In most cases the cause was human error, such as accidental deletion of VMs. Hardware defects cause a quarter of the errors – which shows that backup and DR (disaster recovery) procedures are not yet well established in virtual environments. Due to a lack of knowledge, the requirements for backing up VMs and restoring them are often underestimated. These differ in parts from purely physical setups, so the processes for backup, recovery, and disaster recovery need to be redefined accordingly. It is important to know, for example, that backup operations can lead to resource bottlenecks, as virtualized servers and the relevant network paths are more heavily utilized.

Case 4: Software Licensing and Virtualization

Migrating existing servers and applications into a virtual environment often leads to additional licensing costs, or at least to changed licensing conditions, for both operating systems and application software. Here, inexperienced virtualization beginners can fall into a cost trap, or at the very least a compliance trap.

Two licensing models are widespread in the software world: calculating costs by the number of processors, and coupling the license to a specific computer. Both approaches are often obsolete on virtual systems. On the one hand, several processors or cores are usually at work in a virtualization server. On the other hand, it is precisely a core benefit of virtualization that VMs can be moved back and forth between individual servers, whether to distribute load or to perform maintenance without interruptions.

Conversely, several virtual machines may share a single processor while the user still has to pay the full fee per processor. Or a VM is active only at certain times and does not run the rest of the time. In many of these cases, the user pays “too much”. No uniform, virtualization-friendly model has yet established itself on the market. The software industry is currently still trying out different approaches, such as pay per use.

It is important for IT executives to keep the scaling effect in mind: if the number of virtual machines grows over time, costs increase linearly when the full license must be paid for each VM. It is a good idea to find a license model in which a larger, or even unlimited, number of instances of a system can run on one server. For Windows Server 2008, for example, it may be worth buying the Enterprise version, since one license covers up to four virtual machines at the same time. The Datacenter version allows an unlimited number of instances of the operating system in virtual environments.

The Microsoft Windows Server 2008 R2 Datacenter Edition shows very clearly that the licensing devil is in the details: the unlimited-instances benefit applies only with correct licensing. This requires the purchase of one license per CPU socket, and a minimum number of CPU licenses per server applies. A server with a six-core processor thus requires two licenses of Windows Server 2008 R2 Datacenter Edition – and, beware, a second processor of the same design, since installation is permitted only on a computer with at least two sockets. If such Windows Server 2008 VMs are moved to a different host, the target host must also carry the appropriate Datacenter licensing.
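The socket-based arithmetic described here can be sketched in a few lines. The two-license minimum per server is taken from the text; the helper function itself is illustrative, not an official licensing tool:

```python
# Illustrative sketch of Windows Server 2008 R2 Datacenter licensing as
# described in the text: one license per CPU socket, with a minimum of
# two licenses per server (installation only on machines with >= 2 sockets).

def datacenter_licenses(sockets):
    MIN_LICENSES = 2   # per-server minimum stated in the text
    return max(sockets, MIN_LICENSES)

print(datacenter_licenses(1))   # 2 - even a single six-core CPU needs two
print(datacenter_licenses(4))   # 4 - one license per socket above the minimum
```

The core counts of the processors play no role in this model; only the sockets are counted, which is exactly why the six-core example in the text still needs two licenses.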

A further effect concerns the extended technical possibilities of the virtualization environment. If a customer has been running a hardware-based server with a 32-bit architecture and wants to exploit the advantages of 64-bit architectures as part of the migration to the virtual world, he usually has to pay again for a new operating system license. The same applies to Windows servers if more than 4 GB of RAM is to be used – then one of the Enterprise variants of the operating system must be purchased, which can drastically increase project costs.

Licensing applications

The licensing models of applications and infrastructure components such as databases are often still oriented toward the world of physical computers and have not yet been adapted to the requirements of dynamic virtualized environments. For users of server virtualization, this leads to an unexpectedly high cost factor, on top of complex licensing conditions and a lack of overview. Problems arise mainly from the tight binding of licenses to the underlying hardware, in particular CPU capacity. This can cause difficulties if only part of the physical capacity is to be used and licensed.

Example 1: If an application is used in six virtual machines on a server with a quad-core CPU, then, depending on the license conditions, up to four (per core) or six (per VM) licenses are required for the software.

Example 2: For disaster recovery or backup purposes, clones of virtual machines are often created and stored offline. Depending on the license model, using backup software would require a license for each of these VMs, which can become a real cost factor. However, software manufacturers are gradually adapting their models to the new circumstances. Acronis, for example, offers a license for its backup solution that allows the backup and recovery of up to 99 VMs at a fixed price – provided the VMs are on the same physical machine.

A further problem can arise because limiting the resources of a VM is often not possible, or not sufficiently so. If an application runs in a VM that is sized far too large, maintenance and support contracts can become considerably more expensive.

Licensing of virtualized databases

The conflict between manufacturers and their customers over virtualization is particularly evident in the licensing of databases on virtual servers. Unless hard partitioning is used (this requires segmenting the server with one of a few certified hypervisors), Oracle applies soft partitioning. This means that all physical processors, or their cores, present in the server must be licensed – regardless of how many CPUs the VM in which the database runs actually addresses and uses.

IBM uses a similar model, but with soft partitioning, the IBM License Metric Tool is used to determine the database's maximum actual processor usage, and only that is licensed.

Microsoft distinguishes between Server+CAL licensing and processor licensing. With the first variant, the user licenses the users or devices via CALs (Client Access Licenses) along with the necessary number of server licenses. In the second model, the VMs are counted. SQL Server Standard Edition requires one server license per virtual environment; with one license, up to four virtual environments can be run within a physical server environment.

In the processor model, licensing is based on the physical CPU cores or on the virtual CPUs (vCPUs) used by the VMs. The Enterprise Edition requires a minimum of 4 core licenses per processor. If all cores of a computer are licensed, you automatically acquire the right to an unlimited number of VMs on that host. Alternatively, customers can license the virtual CPUs of a SQL Server VM – but again at least four of them. Here, users run the risk of paying for licenses they do not need, because many SQL Server installations do fine with 1 to 2 cores. In addition, a Software Assurance (SA) contract is required to allow VM mobility (vMotion/live migration) more than once every 90 days.
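The per-core arithmetic can be made concrete with a small sketch. The four-core minimum per processor and per VM comes from the text; the helper functions are illustrative and only count licenses, with no real price list attached:

```python
# Illustrative sketch of SQL Server core licensing as described in the
# text: license either all physical cores (minimum 4 core licenses per
# processor) or the vCPUs of each VM (again minimum 4 per VM).

def core_licenses_physical(sockets, cores_per_socket):
    # At least 4 core licenses per processor, even for smaller CPUs.
    return sockets * max(cores_per_socket, 4)

def core_licenses_per_vm(vcpus):
    # At least 4 core licenses per VM, even for small installations.
    return max(vcpus, 4)

print(core_licenses_physical(2, 6))   # 12 - all physical cores licensed
print(core_licenses_per_vm(2))        # 4  - a 2-vCPU VM still pays for 4
```

The second example shows the trap mentioned in the text: an installation that runs fine on 1 to 2 cores still pays for four core licenses in the per-VM model.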


Companies can only reap the benefits of virtualization at all levels when it is optimally deployed. In many cases, IT processes are not designed for the specific requirements of virtual infrastructures. In addition to the technical infrastructure, IT managers should foster new ways of thinking, the corresponding know-how, and awareness. Organizational structures must also be adapted to the new requirements. (wh)

Checklist license costs and virtualization

  • Do not underestimate the complexity of this topic, and include the indirect licensing costs in the planning.
  • Check whether additional or extended licenses are required. Observe all system levels: operating system, infrastructure, applications.
  • Be prepared for greater restrictions or costs with older operating system versions and applications.
  • Keep in mind that the additional capabilities of the new platform (RAM, CPUs, distribution) are partly accompanied by additional costs.
  • Note that pay-per-use models make costs difficult to predict and control.
  • Request licensing without mobility restrictions for the VMs.
  • If possible, choose licensing based on named users rather than on the processors used.
  • Establish central license management. Use a software asset management tool to monitor and optimize the licensing of all machines.

Microsoft also provides patches for XP and Vista

On June's Patch Tuesday, Microsoft provides security updates against nearly 100 vulnerabilities. There is even something for Windows XP and Vista this time; WannaCry and the NSA leaks make it possible.

The Patch Day on June 13th was roughly as large as the April and May update Tuesdays combined. Overall, Microsoft closes 95 security gaps, plus the update for the integrated Flash Player, which eliminates nine further vulnerabilities.

This time, users of Windows XP and Vista also receive security updates, to better protect those computers against attacks using published exploits. Besides WannaCry (or WannaCrypt), other malware is now known to spread with the help of the leaked attack code, for example the EternalRocks worm.

Several government facilities still use old XP systems. These are not necessarily connected directly to the Internet, but they are probably on the internal network. Once a worm is on the LAN, it can infect these computers as well.

So if you still keep such old systems running, you should take the opportunity to give them a (probably final) security update.

Microsoft has made clear that this will remain an exception. In Security Advisory 4025685, Microsoft has published a guide for these older platforms.

It lists the vulnerabilities that Microsoft is now also fixing in Windows versions that have long since fallen out of support.

This includes Windows 8.0 and Windows Server 2003. The guide also contains an explicit note that the updates are not checked.

Now to the security gaps in current Windows versions (Windows 7 and later). For these, Microsoft has a similar guide ready, although anyone who always uses automatic Windows Update does not need to worry about it.

The updates are installed without further effort – occasional problems included. Anyone looking for detailed information about the security updates has had to rummage through the confusing Security Update Guide since April.

Internet Explorer

A new cumulative security update for Internet Explorer 9 to 11 closes six vulnerabilities, three of which Microsoft rates as critical.


Edge

In its newer browser, Edge, Microsoft removes 17 vulnerabilities, ten of which are classified as critical. Three non-critical gaps (CVE-2017-8498, CVE-2017-8523, CVE-2017-8530) were already publicly known before Patch Day.

Two of these gaps can allow an attacker to bypass security functions. In both cases this concerns the so-called Same Origin Policy, which is supposed to prevent script code on one web page from manipulating the HTML content of a foreign page.
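The Same Origin Policy mentioned above boils down to comparing three URL components: scheme, host, and port. The following Python sketch illustrates that comparison (it is an illustration of the rule, not any browser’s actual implementation):

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Two URLs share an origin iff scheme, host, and port all match."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    # urlsplit leaves the port as None when it is implicit; substitute the
    # scheme's default so http://example.com and http://example.com:80 match.
    defaults = {"http": 80, "https": 443}
    port_a = a.port or defaults.get(a.scheme)
    port_b = b.port or defaults.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com/a", "http://example.com/a"))       # False
```

A bypass vulnerability like the ones patched here lets script code reach content even though this check would say the two pages do not share an origin.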

Attacks that would exploit these vulnerabilities are currently unknown.


Office

Microsoft closes further security holes distributed across nearly all products of the Office family, including, for example, SharePoint Server, Word Viewer, Outlook, and Skype for Business. Three of these vulnerabilities are classified as critical by Microsoft.

Additionally, Outlook users should pay attention to vulnerability CVE-2017-8507. Microsoft does not classify it as critical, but an attacker who exploits it with a specially crafted mail can gain control over the system if the recipient opens the mail in Outlook.


Windows

Several dozen vulnerabilities are distributed across Windows 7 through 10 as well as Server 2008 through 2016. Two so-called zero-day gaps should be treated with the highest priority, as they are already being actively exploited: a gap in the Windows Search service (CVE-2017-8543) and another LNK gap (CVE-2017-8464). Both allow an attacker to inject and execute code.

The Search service gap can also be exploited remotely via SMB within a company network. The LNK vulnerability is the latest in a chain of similar gaps, such as the one exploited by Stuxnet. CVE-2017-8527 also deserves increased attention: a critical vulnerability in the Windows font library that can be exploited with specially crafted fonts. It is sufficient for a user to visit a web page that uses such a font file; the attacker could then gain control over the visitor’s system.


Silverlight

The new Silverlight version 5.1.50907.0 eliminates two critical vulnerabilities. One is the CVE-2017-8527 just mentioned; the other (CVE-2017-0283) is once again a Uniscribe weakness. Both can be exploited in a web scenario to inject and execute code.

Flash Player

For the Flash Player integrated into Internet Explorer (from Windows 8) and Edge, Microsoft distributes an Adobe update. It removes nine security vulnerabilities categorized as critical.

Finally, as every month, there is also a new version of the Windows Malicious Software Removal Tool.

Google Drive to Back Up Your Entire Computer

You are surely familiar with Google Drive, Google’s cloud storage service, which helps users store their important data.

But each Google Drive account is limited to a few gigabytes. If you want cloud storage that backs up all the data on your computer, that will soon be possible.

According to Google’s post on the G Suite blog, Google Drive will get a new app called Backup and Sync.

From that post, we can conclude that Backup and Sync will replace the standard Google Drive and Google Photos Backup apps for PCs and Macs. The current Drive app will remain available for users who prefer it, while Backup and Sync will be a separate download.

“On June 28, 2017, we will launch Backup and Sync from Google, this tool is meant to help everyday users to back up files and photos from their computers, so all files are safe and accessible from anywhere. Backup and Sync is the latest version of Google Drive for Mac/PC, which now integrates with Google Photos desktop uploader.”

According to Google, the Backup and Sync service is intended for general consumers; enterprise-focused users should keep using Drive until Google releases the official Drive File Stream service.

Acer Prepares the Aspire GX-281 Desktop PC with a Ryzen Processor

Although the launch is still in preparation, Acer has reportedly already revealed its latest gaming desktop PC, built around an AMD Ryzen processor and intended to strengthen the high-end Aspire product line: the Aspire GX-281.

The Taiwanese manufacturer showed the Acer Aspire GX-281 at its booth during Computex. The newest gaming desktop PC is powered by an AMD Ryzen 7 1700X processor, paired with 32GB of DDR4 memory and a choice of AMD Radeon RX 480 or Nvidia GeForce GTX 1070 graphics cards.

As with the Asus G11DF (Asus’s first Ryzen desktop), the inclusion of the Radeon RX 480 GPU is slightly below expectations, considering that the AMD Radeon RX 580 is now the newer GPU.

The full specifications of the Acer Aspire GX-281 are not yet available, but based on the unit exhibited at Computex, it is likely one of the more premium gaming desktop PCs on the market today.

The Acer Aspire GX-281 looks much like any other Aspire desktop PC from Acer, relying on a black chassis with red accents on its sides. The top of the device has a panel that can charge smartphones.

Unfortunately, Acer has not yet revealed the price and availability of the Aspire GX-281.