5 tips for administrators hosting professional websites
Below are 5 aspects of web hosting that I feel anyone working on professional websites needs to be aware of:
1. Forget 100% uptime
Just about every hoster advertises a fantastic uptime: at least 99.9x percent! The more nines, the better. And yes, that inspires a lot of confidence. Realize, however, that uptime promises are just that … well, promises. Ultimately, practice is the only measure. And believe me, downtime is something hosters prefer to communicate about as little as possible. Some downtimes are never even noticed, neither by the hoster nor by the customer, for example because they happen at night, or because they are too short and/or too occasional.
You need to know how to interpret the uptime promised by a hoster correctly. A hoster promises nothing more than that its (and only its) infrastructure will be available during the promised minimum time. Still sounds good, doesn’t it? But if you translate that to the needs of the customer, it actually says:
… which is not to say that your website will also be online 99.xx percent of the time!
It is perfectly possible that from the infrastructure’s point of view (network, servers, even the web server software) everything is working, yet a website does not load (well, or quickly). For the hoster there is nothing to worry about, but you are the one left holding the bag. There goes your uptime. Hosters primarily monitor the status of the underlying infrastructure, not whether that one important order page of your webshop actually loads.
Sorry, that doesn’t count
And then there are the many exceptions: things a typical web hoster does not consider to fall under the promised uptime, but that do have an impact. Just think of planned (or unplanned) maintenance on the infrastructure, human errors (also by the customer!), unexpected traffic peaks, external causes (network routing) and force majeure (an attack or accident). All things a hoster will generally exclude when calculating the actual uptime. Besides, without first signing an expensive SLA (see point 2), you will rarely, if ever, be entitled to claim any compensation if the promised uptime is not met.
Smart webmasters therefore monitor the continuous operation of their website themselves, with tools that check from one or more remote locations. I myself use, among other things, Pingdom, an uptime monitor that adds genuine value with its convenient alerts and its ability to measure the loading times of web pages. Armed with that kind of hard data, you can much more easily take steps with your hoster about availability problems.
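The essence of such self-monitoring fits in a few lines. Below is a minimal sketch of a single probe: fetch a page, time it, and classify the result. The URL and the two-second budget are assumptions for illustration; a real monitor (like the hosted tools mentioned above) would run checks like this on a schedule from several remote locations.

```python
import time
import urllib.request
import urllib.error

def evaluate(status, elapsed, max_seconds):
    """Classify one probe result: the page must answer, and answer in time."""
    if status is None or status >= 400:
        return "DOWN"
    if elapsed > max_seconds:
        return "SLOW"
    return "UP"

def probe(url, max_seconds=2.0):
    """Fetch the page once and classify it (2.0 s is an illustrative budget)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except (urllib.error.URLError, OSError):
        status = None  # connection failure counts as downtime
    return evaluate(status, time.monotonic() - start, max_seconds)

# Usage (hypothetical page): probe("https://example.com/order-page")
# Schedule this from cron or a task runner and alert on "DOWN" or "SLOW".
```

The point of the `SLOW` state is exactly the gap described above: the infrastructure can be "up" while the page is too slow to be usable, and your hoster's own monitoring will never tell you that.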
2. An SLA is useless. Until …
The famous Service Level Agreement, or SLA: the solution to all your uptime problems, according to the uninitiated.
How many times have I had a potential customer on the line who insisted on one for his product, because management demanded it. And on the other hand, kudos, because an SLA is definitely something to think about … as long as you know what you are talking about.
An SLA formalizes the expectations (uptime), the procedures that apply in problem situations, and any damages that result from them. It is a kind of insurance, and for mission-critical e-commerce applications a must. But in the end, the same goes for an SLA as for uptime: many promises, but only when it seriously goes wrong do you find out what a hoster is really worth.
Only when it seriously goes wrong do you find out what a hoster is really worth.
I can only advise operators of websites, besides the SLA, to think at least as carefully about issues such as business continuity and disaster recovery. What if the worst happens? You will be amazed at the number of possible disaster scenarios and their consequences. And feel free to have the hoster respond to some of those scenarios, and ask what disaster recovery plan it has.
In all of this, by the way, I assume that the hoster actually has its affairs in order when it comes to infrastructure, and that you as a customer can count on a stable, high-performance and secure environment. Dare to ask questions about the underlying infrastructure your web applications are hosted on. The more transparent the hoster is, with or without a confidentiality agreement, the better.
3. Take care of your own backups
Data is sacred. So never delete data. And when the day comes that you need data from the past … then what? Do you have a backup? No?! Your hoster? Yes, phew! But: how old are those backups? And what does it cost, both in time and money, to get back up and running?
In my time as an editor, I wrote almost every month about how to make backup copies. Useful info, no doubt, but at the same time I realized that only a minority cares about something as banal as backups. And with the advent of the cloud, do you no longer need backups at all?
With the advent of the cloud, you have no more excuses for not making (additional) backups. Cheap storage services are plentiful, and much can be automated. I can therefore only advise, in addition to the backup services the hoster itself offers, however good and reliable they may be, to also keep a private backup … just in case. I myself use Managewp.com for this purpose, which not only makes WordPress backups and saves them to Google Drive or Dropbox, but can also update the sites and plug-ins. Whatever may happen to my data, I always have a (second, third, and eighth) version available in my own independent location.
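For those who prefer to roll their own, the core of such a private backup is small. Here is a minimal sketch, assuming you have local (or mounted) access to the site's files; the paths in the usage comment are hypothetical, and in practice you would also dump the database and sync the archives to an independent storage location.

```python
import tarfile
import time
from pathlib import Path

def backup_site(site_dir, backup_dir):
    """Pack the site directory into a timestamped .tar.gz and return the archive path."""
    site = Path(site_dir)
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"{site.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(site, arcname=site.name)  # keep the site name as the top-level folder
    return archive

# Usage (hypothetical paths): backup_site("/var/www/example-site", "/var/backups/sites")
# Run it from cron, then sync the backup directory to cheap cloud storage so a
# copy always lives outside the hoster's infrastructure.
```

The timestamped filenames give you exactly the "second, third, and eighth" versions mentioned above, and answer the "how old are those backups?" question yourself instead of hoping the hoster's answer is acceptable.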
4. Your web developer (probably) sucks
In tip 1 about uptime I already touched on this, but I still want to take the trouble to go into it. When a website goes down, it is very often because the server can no longer “handle it”. The remedy seems simple: provide more memory and/or more computing power … until the server hits its limits again a year later.
A web hoster will be only too happy to sell you a heavier server. After all, that is partly what they make their living from.
Heavier (and more expensive) servers are only one part of the solution.
A common complaint at the helpdesk of a hoster: “my website is slow!”. Of course, everything starts with a well-designed hosting platform and sufficient resources, but very often it turns out the website itself simply works sluggishly (e.g. umpteen plugins and modules), and everything grinds to a halt as soon as the number of simultaneous visitors increases. Inefficient database queries are also a daily occurrence. Website owners only notice this when the site suddenly attracts more people, and by then it is often too late to adjust the code substantially. And so we are back to square one: quickly add some extra server resources, and we’re good again, right?
In my hosting career, however, I have often dared to give the opposite advice: instead of more expensive servers, put your money better (or: first) into a web developer who knows how to build a smart, resource-efficient, and thus scalable application. Many factors play a role, and it has struck me that a lot of web builders have neither the experience nor the expertise to guarantee the scalability of a website in the longer term. By the way, when a website is performing poorly, a hoster will soon recommend you use object caching techniques where possible. But that, too, requires the necessary knowledge.
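To make the object-caching idea concrete, here is a minimal in-memory sketch: an expensive result (say, a database query) is computed once and then served from memory until a time-to-live expires. The class and TTL value are illustrative; real sites would typically use a shared store such as Memcached or Redis so the cache survives restarts and is shared between workers.

```python
import time

class TTLCache:
    """Keep computed objects in memory for ttl seconds, sparing the database."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and hit[0] > now:
            return hit[1]                      # cache hit: no query runs
        value = compute()                      # the expensive part, e.g. a DB query
        self._store[key] = (now + self.ttl, value)
        return value

# Usage (hypothetical names):
#   cache = TTLCache(ttl=60)
#   products = cache.get_or_compute("homepage-products", run_homepage_query)
```

With a cache like this, a hundred simultaneous visitors trigger one query instead of a hundred, which is exactly the kind of resource efficiency that beats buying a heavier server.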
My advice: have the specifications or quotation for the website state in detail the maximum loading time at a given number of simultaneous visitors. That way you are sure from the start that the developer pays attention to optimizations.
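Such a requirement is also easy to verify yourself. Below is a rough sketch of a load test: hit a page with N simultaneous requests and check whether even the slowest response stays within the agreed budget. The URL, visitor count, and budget are assumptions for illustration; dedicated tools do this far more thoroughly, but the contractual check itself is this simple.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch_once(url):
    """Fetch the page fully and return the elapsed time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.monotonic() - start

def load_test(url, simultaneous_visitors=50):
    """Fire simultaneous_visitors concurrent requests and collect the timings."""
    with ThreadPoolExecutor(max_workers=simultaneous_visitors) as pool:
        return list(pool.map(fetch_once, [url] * simultaneous_visitors))

def within_budget(timings, max_seconds):
    """The contractual check: even the slowest visitor stays under budget."""
    return max(timings) <= max_seconds

# Usage (hypothetical): within_budget(load_test("https://example.com/", 50), 2.0)
```

Run a check like this before accepting delivery of the site, not after the first marketing campaign brings the traffic peak.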
5. Own country first
You’d think that in a globalised market such as that of the cloud, the physical location of the hosting would not matter very much. The opposite is true. Hosting a Belgian website in an American data center, for instance, is not optimal, unless you have a lot of American visitors. Data communication may happen at the speed of light, but with such large physical distances between stations, and the extra network hops that come with them, delays are the result.
Some of it depends on the network connectivity the hoster offers, but the general rule remains: the bigger the physical distance between the server and the visitor, the slower the network connection responds. Certainly in mission-critical environments, every millisecond counts. Fortunately, for heavily visited international websites much can be fixed; just think of setting up a Content Delivery Network (CDN).
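The physics behind the rule is easy to put in numbers. Light in optical fiber travels at roughly 200,000 km/s, so even a perfect straight-line connection has a hard lower bound on round-trip time. The distances below are illustrative ballpark figures, and real routes add routing, queuing, and server time on top.

```python
# Best-case latency from distance alone (illustrative figures).
FIBER_SPEED_KM_S = 200_000.0  # rough speed of light in optical fiber

def round_trip_ms(distance_km):
    """Best-case round-trip time over a straight fiber path, in milliseconds."""
    return 2 * distance_km * 1000 / FIBER_SPEED_KM_S

# A Belgian visitor to a nearby server (~100 km) vs. a US east coast data
# center (~6,000 km): physics alone costs ~60 ms per round trip, and a page
# that needs several round trips multiplies that.
print(round_trip_ms(100))   # local hosting
print(round_trip_ms(6000))  # transatlantic hosting
```

Since a single page load typically involves many round trips (DNS, TCP and TLS handshakes, then the assets), those milliseconds add up quickly, which is exactly what a CDN mitigates by moving content closer to the visitor.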
Besides connectivity, the physical storage location of data also plays an important role in the choice of a cloud provider. Where your data is physically located is often not entirely clear, let alone which parties can (and may) get access to it.