
Thursday, 28 February 2013

Screenshots: Five mostly obscure desktop backup tools



It's simple: If you're not backing up your data, at some point you're going to regret that mistake. For many medium to large businesses, data is typically backed up via shared directories on a server. But for smaller companies, or end users with needs outside of shared directories, it's nice to know there are backup tools that can be installed free of charge and can handle one simple task: backing up your desktop data.
I'm not talking about applications with bells and whistles to suit every need. What I'm looking for are applications that can do one job and do it dependably. In my quest to find a backup tool to meet these needs, I came across five that I can happily recommend. Let's take a look at these tools and see which, if any, will do the job you need done.

Five Apps

There is one caveat with some of these tools: for a few, the free version is intended for private use only. Business versions of the same tools can be purchased for a small price.

1. Backup Maker

Backup Maker is one of those tools you need if what you're looking for is simplicity and security. Backup Maker handles your desktop backups with an interface that nearly anyone (with any level of experience) can use.

This tool easily handles compression and even offers strong encryption (256-bit AES). Backup targets include USB drives, FTP servers (passive or FTP over SSL), and CD/DVD. Backup Maker even supports spanning backups (splitting larger backups into multiple files). The personal edition is free; if you need a professional license, it will set you back $66.63 USD. Backup Maker works with Windows XP, Vista, 7, and 8.
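Spanning is conceptually just slicing an archive into fixed-size pieces. As a rough illustration in Python (this is purely a sketch of the idea, not Backup Maker's actual on-disk format):

```python
def split_into_spans(data, span_size):
    """Slice one archive's bytes into fixed-size spans, e.g. so each
    piece fits on a CD/DVD. Illustrative only; not Backup Maker's format."""
    return [data[i:i + span_size] for i in range(0, len(data), span_size)]

spans = split_into_spans(b"x" * 10, 4)  # pieces of 4, 4, and 2 bytes
```

Joining the spans back together in order reproduces the original archive byte for byte.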

2. Genie Timeline 2012

Genie Timeline 2012 has one of the simplest interfaces you'll find (bested only by the Linux-only Deja Dup) and is about as close to 'set it and forget it' as any backup tool can be. Although the free version of Genie is quite limited in scope and features, it will reliably back up desktop data with just a few quick clicks.

One of the nice features of Genie is that it can back up both unlocked and locked files (though I wouldn't depend on a tool like this for a machine that runs a local database, such as MySQL). Genie Timeline offers an incredibly easy way to exclude files, called the No Backup Zone: simply drag and drop files into this folder and they will not be backed up. Genie Timeline is available for Windows XP, Vista, 7, and 8.


3. FBackup

FBackup is a nice, easy backup tool with minimal features and maximum reliability. You can set FBackup to run at hourly, daily, weekly, or monthly intervals. The major limitation of FBackup is that you do not get an option for incremental or differential backups: all you get is full or mirror backups.

Probably the one feature that won me over to FBackup is its application-specific plugins. The developers have set up plugins that enable quick and easy backups for popular applications. For example, there is an email plugin that will back up popular email clients (such as Thunderbird and Outlook). This feature should win over anyone who doesn't want to spend a great deal of time setting up backups. FBackup is available for Windows XP, Vista, 7, and 8.


4. LuckyBackup

LuckyBackup is the first of the Linux backup tools on the list, and it is my personal backup solution of choice. Not only is it incredibly easy to use, it is also as flexible as the platform it backs up.

LuckyBackup's features include: backing up or syncing directories; creating snapshots of data; dry (test) runs; exclude lists; adding or removing rsync options; executing user-specified commands upon a successful run; easy restores; and much more. LuckyBackup does not include its own scheduler but works with the Linux cron system to create scheduled backups. With LuckyBackup you can create different profiles, so you can group backup jobs together for granular setup. LuckyBackup runs on most modern Linux systems.
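Under the hood, a mirror-style LuckyBackup task boils down to an rsync invocation plus a cron entry. A minimal sketch of that idea in Python (the paths and the crontab line are hypothetical examples, not LuckyBackup's own configuration):

```python
def rsync_command(source, dest, extra_opts=()):
    """Build the rsync call a simple mirror-style backup task runs.

    -a (archive mode) preserves permissions, ownership, and timestamps;
    --delete removes destination files that no longer exist at the source,
    so the destination mirrors the source exactly.
    """
    return ["rsync", "-a", "--delete", *extra_opts, source, dest]

# A cron entry for a nightly 2 a.m. run might look like:
#   0 2 * * * rsync -a --delete /home/user/Documents/ /media/backup/Documents/
cmd = rsync_command("/home/user/Documents/", "/media/backup/Documents/")
```

Scheduling is delegated entirely to cron, which is why LuckyBackup doesn't need a scheduler of its own.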


5. Deja Dup

Deja Dup is the aforementioned backup tool with one of the most minimal interfaces you will find on an application. Despite that incredibly simple interface, Deja Dup offers plenty of features. With very little setup, you can have your data backed up to an attached drive or a cloud service (such as Amazon S3, Rackspace Cloud Files, and Ubuntu One). By default, Deja Dup locally encrypts and compresses your data and performs incremental backups. If you're looking for an easy way to back up your Linux desktop data, you'd be hard-pressed to find a simpler solution, regardless of platform.



Bottom line

If you have to ask the question "Should I be backing up my data?" you are in the wrong industry and should return to using tin cans and stone tablets. The ultimate question is not if you'll lose data, but when. Even if your shared drives are backed up on a server, you might need to back up specific local directories, or your business may be a one-man band in a home office with a software budget of next to zero. Whatever the case, give one of these solutions a look and see if it will handle the task at hand.

Alpha Review: BitTorrent SyncApp


Takeaway: BitTorrent’s SyncApp experiment looks rather promising and makes setting up a personalized cloud for your files easy.
Cloud file services are definitely a convenient way to keep backups of all your important documents, pictures, and other files. Usually, you sign up for an account, then drag and drop, letting the copy operation take over from there. Despite the ease of use and convenience of such services, a few points give us pause, such as whether those files are secure and whether anyone managing the server can peek in and look at them. With most of the services being free to use, additional stipulations are sometimes added to the service agreements that allow "targeted" advertising and disclaim liability for break-ins.
Personally, I can't see a reason why one couldn't create a personal cloud to avoid these thorny issues. BitTorrent, Inc. has a little app in the works called SyncApp that should ameliorate those concerns, and the future is looking bright.

SyncApp

SyncApp is quite simply a means to create that personal cloud for all of your files. It is currently undergoing restricted alpha testing under the BitTorrent Labs banner, but I was able to procure an invite to the program for review purposes. Without a doubt, I like what I see thus far.
SyncApp goes to work, swiftly tackling files

Product Information:

  • Title: SyncApp (Alpha)
  • Author: BitTorrent, Inc.
  • Product URL: http://labs.bittorrent.com/experiments/sync.html
  • Supported OS: Windows XP, Vista, 7 and 8, OS X and Linux (x86 and ARM flavors)
  • Price: Freeware
  • Rating: 4 out of 5
  • Bottom Line: BitTorrent’s SyncApp experiment looks rather promising and makes setting up a personalized cloud for your files easy, giving you full control without relying on third-parties, like Dropbox.
Essentially, you install SyncApp on every computer you want to include in your cloud, whether it's a Windows, OS X, or even a Linux box. You choose the directories you'd like to sync, each with its own sync passphrase or "secret" that allows you to share these locations among your allowed machines. Once a computer has your secret and a destination selected, SyncApp will begin to pull data from the origin computer over the BitTorrent network.
Because of the nature of BitTorrent, the more machines you pool together, the more consistent your speeds will become. Also, if a node goes down, the other computers in the self-created mesh will pick up the slack. This is why it's important to add a few extra machines beyond the primary ones you wish to sync, to provide that extra level of redundancy.
Setting a secret on the system in order to sync files back and forth
After playing around with SyncApp, I did notice one strange bug that, although to be expected in an alpha version, will hopefully be addressed before this software is released to the masses. When I synced a directory between two machines, blew away the files on the originating system, and added fresh files, SyncApp seemed to get confused and didn't accept new transfers until I restarted the app on both ends. Once I did that, SyncApp resumed normally. This could perhaps be a network peering glitch.

Bottom line

All in all, the very concept of a personal cloud for your files that you are in complete control of is a very welcome thought. No longer are you reliant on a third party like Dropbox or MEGA, and you won't have to worry about servers going down or being compromised by hackers. If you properly harden your own nodes for syncing, you can have a very secure and reliable means of mitigating a file-loss disaster. Just be sure BitTorrent-type traffic is allowed on your network, or get permission from your IT department before deploying SyncApp.

Secure your personal cloud information with BoxCryptor


Takeaway: A look at BoxCryptor for BYOD tablets. This encryption software is optimized for securely connecting to cloud services, including Google Drive, Microsoft SkyDrive, and Dropbox.
BYOD is all the rage. Users are bringing their tablets, smartphones, and laptops to use at work. This new approach has a few issues to surmount, and one of the biggest is security. Yes, it is crucial for the company to ensure that its data is secure, but it's equally crucial for users to be able to secure their personal data away from the prying eyes of the company and fellow employees.

Personal data includes cloud drives like Google Drive, Microsoft SkyDrive, and Dropbox. If you connect your tablet to those services, your cloud data is ready for prying eyes — unless you make use of a handy tool like BoxCryptor for Android or iOS.
BoxCryptor is free encryption software that’s optimized for these cloud services and features the following:
  • Access to all encrypted files, photos, and music in your Dropbox, Google Drive, or Microsoft SkyDrive
  • Encryption and decryption take place directly on the device (the password is never transmitted)
  • Secure storage uses AES-256 standard
  • App unlock PIN for additional security
  • Limited EncFS compatibility
  • Filename Encryption (in desktop client only)
Let’s walk through the process of installing and connecting BoxCryptor with Google Drive on an Android tablet.

Installation

As you might expect, BoxCryptor is easy to install. Just follow these simple steps:
  1. Open the Google Play Store (or the App Store for iOS devices)
  2. Search for “boxcryptor” and tap on the entry for the app
  3. Tap Download
  4. Tap Accept & install
When the installation is complete, you should find an icon in the app drawer and/or on the home screen. Tap that icon to begin the process of connecting BoxCryptor.

Connecting to your account

When you first launch BoxCryptor, you’ll be presented with a screen that asks you which service you want to connect to (Figure A).
Figure A
BoxCryptor on a Verizon-branded Samsung Galaxy Tab.
Since we’re connecting to Google Drive, tap the listing for this service. When you do this, a new window will appear, asking you to allow or deny access. Tap Allow Access, and BoxCryptor will begin the handshake process of connecting with your Google Drive account. At this point, you should see a listing of your folders (Figure B). You can now navigate into those folders safely.
Figure B
Here’s what your folder listing will look like within BoxCryptor.
This, of course, isn’t terribly safe, as anyone can open BoxCryptor and see your files and folders. To prevent this from happening, set up PIN access to the BoxCryptor application by following these steps:
  1. Open BoxCryptor
  2. Tap the menu button (three vertical dots) in the upper right corner
  3. Tap Preferences
  4. Tap App Unlock (Figure C)
  5. Enter a PIN
  6. Re-enter the PIN (for confirmation)
Figure C
You can also define the cache size for BoxCryptor in the Preferences settings.
That’s it! Now, anytime BoxCryptor is launched, it will require that PIN before the app will open.
The one caveat to BoxCryptor is that you can only link the app to one service at a time. If you use multiple services on your tablet, you’ll have to unlink one to link another.
You can create new, encrypted folders within BoxCryptor as well. You'll notice the Advanced tab with Filename Encryption; this feature is not available in the mobile application (only the desktop version). To create a new folder, follow these steps:
  1. Open BoxCryptor
  2. Make your connection, and then tap the folder icon in the upper left corner
  3. In the New Encrypted Folder window (Figure D), give the folder a name and a password
  4. Uncheck the Save password checkbox (for added security)
  5. Tap the checkmark in the upper right corner
Figure D
The password is not optional — it must be created in order for the new folder to be encrypted.
In order to view the contents of the newly created folder, you’ll have to enter the password. You won’t have to enter the password, however, if you’re viewing your account online (for example, viewing Google Drive from within a desktop web browser).
One strange behavior of BoxCryptor is that there isn't a Back button. Once you're inside a sub-folder, in order to get back to the parent folder, you have to do the following:
  1. Tap the menu button
  2. Tap Preferences
  3. Tap Change source folder
Once you’ve done that, you’ll be in the parent folder.
BoxCryptor is a great way to help secure your Google Drive, Microsoft SkyDrive, or Dropbox content on your tablet. This can help make the transition to BYOD much more secure. Give it a try and see if it doesn't help make your IT department breathe a sigh of relief as BYOD takes over your organization.

The CloudOS arrives: Map your cloud journey with real tools


Takeaway: A look at the benefits of Microsoft's CloudOS, which consolidates all the tools needed to manage a modern hybrid cloud platform.
With System Center 2012 Service Pack 1, Microsoft delivers the CloudOS. CloudOS is a new label that may take off; it refers to viewing all the parts of a modern hybrid cloud as one platform. This means thinking of your on-premise Microsoft private clouds, partner-hosted services, and Microsoft-hosted public cloud subscriptions as a single resource pool for running your enterprise software workloads in their most economical and best-performing locations. System Center 2012 Service Pack 1 (SP1) adds significant cloud integration to every component in the System Center suite.
The concept of CloudOS is that you have greater freedom to architect solutions that take advantage of the geography and economies of your business, partners and suppliers, and customers. You can locate application components where they cost less and/or perform better:
  • For scaling out and right-placing: If you don’t already have a private cloud resource in the right place or at the lowest price, you can locate a partner in the appropriate region or industry specialization, or subscribe to public cloud services that match the requirements.
  • For getting started: Organizations just beginning a cloud journey should take a look at CloudOS (Windows Server 2012, System Center 2012 SP1, and Windows Azure) as a platform that can safely and confidently transition some or all IT services to the cloud. The cloud journey can be on a comfortable, even extended, timetable that is driven by evolving business needs, not by the priorities of legacy IT infrastructure investments.
One economy of scale at work is that Microsoft is sharing lessons learned in Windows Azure in "trickle down" fashion. That is, the scripts (such as PowerShell) and user interfaces (such as the Azure Service Management Portal) that very efficiently manage millions of Windows servers in Azure can deliver the same efficiencies, just on smaller scales, in your data center, a service provider's, or both. You can essentially create a "mini-Azure" on premise, share a partner's "mini-Azure," or subscribe to pieces of the "real Windows Azure," and use the same tools across all three clouds.

Connecting the clouds: From the application up

A challenge for enterprises trying to plan long-term IT strategies these days is the difficulty of forecasting a solid five-to-ten-year roadmap that transitions from a traditional on-premise IT plant to an optimized hybrid cloud environment. Most organizations understand by now that unless they begin to adopt appropriate cloud technologies, they may be at a disadvantage in the global marketplace to competitors that successfully leverage cloud economics.
With CloudOS, Microsoft is creating a tipping-point moment that could accelerate cloud adoption in many companies. There are several components to this success story:
  • Microsoft “owns” the whole CloudOS stack, i.e., Windows Server + System Center + Azure.
  • System Center 2012 SP1 components create sticky application-layer attach points to the cloud.
  • The cloud migration journey need be neither all-or-nothing nor arbitrary: the migration can be modular and application-driven.
  • Service providers populating the hybrid cloud ecosystem have it easier: they can start up hosted services using lower-cost commodity hardware and at smaller scale.
Each of these components is unique in the industry, and they combine to create something tangible: stepping-stones that take an organization from where it is today to an eventual optimized cloud environment, either hybrid cloud or all-public cloud. I heard a CIO remark that CloudOS was real enough to chart out a 10-year roadmap for their company; this was the first set of comprehensive cloud tools and technologies that they could understand, see and touch, and confidently plan around.

Mapping essential cloud characteristics to System Center features

I previously wrote about the U.S. federal NIST's definition of the essential characteristics of a cloud. It makes sense to evaluate product features against business benefits that are backed up by process validation such as ITIL and the NIST definition. For example:
  • The business benefit is "avoid overcapacity," such as purchasing more infrastructure than you need and ending up with idle capacity.
  • The NIST essential characteristics that enable this benefit are "resource pooling" and "rapid elasticity."
  • CloudOS expresses these characteristics in connectors between System Center Virtual Machine Manager (VMM), Operations Manager, and Orchestrator that can avoid overcapacity situations.
Here are a few specific features in System Center 2012 SP1 that power the CloudOS, mapped to NIST cloud essential characteristics:
  1. On-Demand Self Service: App Controller lets users provision Virtual Machines and cloud services. Service Manager lets users open tickets which may invoke Orchestrator workflows that perform provisioning and scaling tasks.
  2. Broad Network Access: System Center 2012 SP1 Configuration Manager and Windows Intune interoperate for Mobile Device Management (MDM) and universal application distribution, supporting Android and Apple iOS devices alongside Windows PCs and Windows Phones. Microsoft identity management is open and includes Google, Yahoo, Microsoft ID, and Active Directory authentication providers.
  3. Resource Pooling: VMM pools data center fabric for VM provisioning; Operations Manager pools management servers and gateways. Windows Server 2012 Storage Spaces pools disks of any type from any disk controller.
  4. Rapid Elasticity: Provision or delete a VM in minutes. Add disks to highly available storage pools without adding array controllers. Extend on-premise network and compute resources to clouds for peak and burst capacity.
  5. Measured Service: VMM with Operations Manager produces charge-back reports on a per-cloud/per-service basis that let you quantify exactly how much a service is costing.

Microsoft’s secret weapon for renewed industry relevance

Once you become familiar with the features and capabilities of Windows Server 2012, System Center 2012 SP1, and Windows Azure, you can see a complete canvas emerge, upon which you can draw a low-risk, long-term, and high-yield strategic IT roadmap for an organization. CloudOS is a compelling concept because it has the appeal of predictability and positive ROI that are Microsoft strengths in the enterprise computing space.
That Microsoft is building a great product here can be seen in Microsoft's Server & Tools division being the company's fastest-growing division in Q4 last year. Internally, Microsoft knows it is onto a good thing: late in 2012, the company reorganized to merge the Windows Server and System Center technical teams. Breaking down organizational barriers between the server and tools teams makes a lot of sense when you honestly want to blend the OS and the tools into a single entity: CloudOS.

Wednesday, 27 February 2013

The Benefits of Cucumber | Cucumber Is Good for Your Health



The Benefits of Cucumber | Cucumber Is Very Good for the Health and Beauty of the Human Body - In this post, Kolom Blog GRATIS discusses the health benefits of cucumber. Cucumber is very easy to find in Indonesia in a wide variety of forms, such as cucumber juice or as an accompaniment to dishes (pecel, nasi goreng, sego jotos, and so on), and you can buy it in village shops or at markets. According to research, behind the freshness of cucumber lie compounds with many benefits for health and even beauty. Hopefully this article adds a little to your knowledge. Although cucumber has many health benefits, never consume it excessively, as that is not good for the body either. Here are some of the benefits of cucumber for health and beauty, according to recent research:

  1. Cucumber is 96% water, which is more nutritious than plain water, helping the body stay hydrated and regulate its temperature
  2. Place cucumber slices on your eyelids, changing them every 5 minutes for 15 minutes, to relieve frequent tiredness and drowsiness
  3. Cucumber juice can promote urination
  4. Cucumber has diuretic properties (a cooling and cleansing effect). Its high water content (along with vitamins A, B, and C, magnesium, potassium, manganese, and silica) makes it useful in face masks for firming the skin
  5. The ascorbic acid and caffeic acid in cucumber lower water retention, reducing puffiness under the eyes
  6. Cucumber skin helps heal sunburn
  7. Digestive problems such as heartburn, acid gastritis, and ulcers can improve with cucumber juice. The dietary fiber in cucumber helps expel toxins from the digestive system
  8. Cucumber can relieve fever: press the juice from a cucumber and apply it to the abdomen until the fever subsides
  9. Cucumber skin is a source of silica, which improves joint health by strengthening ligament tissue
  10. Cucumber contains the enzyme erepsin, which aids protein digestion
  11. Cucumber contains lariciresinol, pinoresinol, and secoisolariciresinol, three lignans that research suggests may reduce the risk of several cancers: breast, prostate, uterine, and ovarian
  12. Cucumber contains a lot of water and is an ideal food for losing weight. Cucumber skin is rich in fiber
  13. Cucumber contains potassium, magnesium, and fiber, which help keep blood pressure normal; it is good for treating both low and high blood pressure
  14. Cucumber seeds can expel tapeworms from the intestinal tract, act as an anti-inflammatory, and are effective in treating swollen mucous membranes of the nose and throat
  15. Cucumber contains silica, which helps prevent fingernails and toenails from cracking and breaking
  16. Cucumber contains calcium (an intracellular electrolyte) and is a heart-friendly food, helping lower blood pressure and heart rate by countering the effects of sodium
  17. Cucumber contains vitamins A, B1, B6, C, and D, as well as folate, magnesium, and calcium; mixed with carrot juice, it can help relieve joint pain by lowering uric acid
  18. Pyorrhea of the teeth and gums can be treated by drinking cucumber juice; eating raw cucumber increases saliva, which neutralizes acids and bases in the mouth
  19. Cucumber is beneficial for diabetics because it contains a hormone needed by pancreatic cells to produce insulin
  20. The water in cucumber acts as a diuretic, helping flush toxins and metabolic waste from the body through the urine
  21. Eating cucumber regularly helps dissolve gallstones and kidney stones. Cucumber helps relieve bladder and kidney problems because its water content supports kidney function by promoting urination
  22. Cucumber juice mixed with carrot, spinach, and lettuce helps hair growth; its silica compounds support hair growth
  23. Researchers have found that the sterol compounds in cucumber can help lower blood cholesterol
  24. Cucumber can treat acne: place cucumber slices on the affected area
  25. Cucumber helps relieve canker sores. Eaten daily in fairly large amounts, cucumber gives a cooling sensation in the mouth that can reduce heat, provided the body's blood condition is otherwise normal

Hopefully this post about The Benefits of Cucumber | Cucumber Is Very Good for the Health and Beauty of the Human Body is useful for all of you and adds a little to your knowledge. And don't forget to add to your network of friends (Follow Up KBG) at Kolom Blog GRATIS.


How SkyShellEx makes the SkyDrive cloud desktop client easier to use


Takeaway: If you’re looking for a simple way to expand the Microsoft SkyDrive syncing service, Jack Wallen says your best bet is the user-friendly SkyShellEx app.
Cloud storage, and the ability to sync that storage to the desktop, makes professional and personal life so much easier. Not only do these services give you instant access to data on a remote machine, they let you access that data from any machine connected to the account. For many IT pros, the de facto standard has become Dropbox, but there are plenty of options out there.
If you’re a fan of Microsoft products, you might be familiar with SkyDrive. SkyDrive is a rebrand of Windows Live Folders and is now focused on matching Windows 8’s user interface. This doesn’t mean the cloud service will not work with earlier iterations of Windows; in fact, I have tested SkyDrive on Windows XP and Windows 7 with stellar results.
There are two types of cloud sync desktop applications: those that work with a “root” folder, and those that do not work with a “root” folder. Cloud services that work with a “root” folder basically sync the contents of that “root” folder and nothing more. Cloud services that work without a “root” folder are capable of syncing folders from pretty much anywhere on the desktop. SkyDrive is a “root” folder sync service. If you’re creative enough, you could create symbolic links from within the “root” folder that point to folders outside the sync folder, but I don’t think most users will want to tackle that process.
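For the curious, that symbolic-link trick boils down to a single call. A sketch in Python (the function and folder names are hypothetical, and on Windows this requires administrator rights or the symlink privilege):

```python
import os

def link_into_root(folder, sync_root):
    """Create a symbolic link inside the sync "root" that points at a
    folder elsewhere on disk, so the client sweeps it up in its sync."""
    link = os.path.join(sync_root, os.path.basename(folder.rstrip(os.sep)))
    os.symlink(folder, link, target_is_directory=True)
    return link
```

This is essentially what SkyShellEx automates from the right-click menu, as described below.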
You can get around this little SkyDrive limitation by installing and using SkyShellEx. This tiny shell extension allows you to quickly create those symbolic links from the right-click context menu in Explorer. SkyShellEx is available for Windows Vista, Windows 7, and Windows 8 in 32- and 64-bit versions. Outside of a SkyDrive account, the only requirement is .NET 4. The one caveat to SkyShellEx is that it's "as is" software: there is no support.
You must have a SkyDrive account, and that includes having the SkyDrive desktop client installed. During the installation of that application, you will be prompted to enter your SkyDrive credentials. You will also be prompted for what you want to sync. You have two choices (Figure A):
  • All Files And Folders On My SkyDrive
  • Choose Folders To Sync
Figure A
If you have a large number of files, you might need to pick and choose what is synced.
Note: With SkyDrive, you cannot sync folders that are shared. In fact, if you force that issue, those shares will be broken.
After you select what to sync, the client will be installed, and you will see a small cloud icon in your system tray. Right-click that icon and select Settings.
With SkyDrive properly installed, you can install the shell extension. Download the installer file and, once the download is complete, double-click the file to begin the installer wizard. With the installation complete, close out all Explorer windows and then reopen Explorer. Navigate to a folder you want to include in your SkyDrive sync and right-click that folder. The context menu should appear with a new entry: Sync To SkyDrive (Figure B).
Figure B
This is a much easier way to create symbolic links.
Select the Sync To SkyDrive option and the symbolic link will be created, connecting a folder outside of the SkyDrive “root.” You can include as many folders as you need, as long as you don’t go over your data limit on your SkyDrive plan.
If you want to stop syncing a folder with SkyDrive, follow these steps:
  1. Open Explorer.
  2. Navigate to the parent directory that contains the folder to be disconnected.
  3. Right-click that folder and select Stop Syncing To SkyDrive.

Conclusion

If you’re looking for a simple way to expand the Microsoft SkyDrive syncing service outside of the standard issue C:\Users\USERNAME\SkyDrive folder, you will not find an easier way to do so than with SkyShellEx. Instead of spending time learning how to create symbolic links within your Windows directory hierarchy, you can install this user-friendly app and make adding new folders to SkyDrive’s sync a simple right-click.

The myth of the always-on cloud



Takeaway: Be cautious when considering uptime claims for cloud services and software. Realistic expectations will lead to a better experience when moving to the cloud.
One of the big promises of cloud computing is the idea of always-on. The cloud as a whole (infrastructure, platform, and software) is supposed to be available at all times. Service providers at every layer offer availability guarantees of 99% and above, and everyone claims that their services are resilient and failure-tolerant. While the track record of different providers can vary wildly, the real problem is that clients often forget that even 99.99% availability does not mean that a service will be accessible all of the time.
Let’s disregard for a moment the large differences between what each provider describes and considers as being available, and look exclusively at the numbers. 99.99% uptime of a service over the course of a year means that the service can be offline for about 52.56 minutes per year, or roughly one minute per week. This could account, for instance, for a server reboot every other week or so. As the uptime guarantee decreases, the downtime numbers obviously grow: for 99.9% uptime, the service can be offline for 525.6 minutes, or 8.76 hours, in a year; for 99% uptime, it would be 5256 minutes, or 87.6 hours, which is more than an hour and a half every week. While some of these numbers may seem small, they can severely impact systems and processes that aren’t ready for them.
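The arithmetic above is easy to reproduce; a quick Python check of those downtime budgets:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability):
    """Minutes per year a service may be down under a given uptime guarantee."""
    return MINUTES_PER_YEAR * (1 - availability)

for uptime in (0.9999, 0.999, 0.99):
    minutes = downtime_minutes_per_year(uptime)
    print(f"{uptime:.2%} uptime allows {minutes:.1f} minutes of downtime per year")
```

Running this reproduces the figures quoted above: roughly 52.6, 525.6, and 5,256 minutes per year.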

Handling failures

The first thing that anyone looking to use cloud-based services of any kind must consider is the possibility of failure: what happens when a call to the service returns an error response or, even worse, no response at all? Retrying the request is an obvious answer, but also a problematic one. If a service goes offline for a significant amount of time, a “retry loop” can trap an application or create unexpected situations from which it can’t recover.
Worse still, issuing many retries can create a bottleneck at the receiving service, compounding the original failure. An interesting example of this was last year’s October outage of AWS, where the provider stopped accepting requests for creating EBS volumes and EC2 instances due to an excessive number of errors. In this sense, retrying requests can create a cascade of failures across interdependent services that becomes harder and harder to recover from.
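One common defense against both problems is a bounded retry with exponential backoff and jitter: the client gives up after a fixed number of attempts (so it never loops forever) and spaces its retries out randomly (so thousands of clients don't hammer a recovering service in lockstep). A minimal sketch, where `fetch` stands in for any hypothetical remote call:

```python
# Bounded retry with exponential backoff and jitter. fetch() is a
# placeholder for any remote call that may raise ServiceUnavailable.

import random
import time

class ServiceUnavailable(Exception):
    pass

def call_with_backoff(fetch, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry fetch() a bounded number of times, backing off between tries."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except ServiceUnavailable:
            if attempt == max_attempts - 1:
                raise  # give up; let the caller run its contingency plan
            # Exponential backoff, capped, with jitter so that many
            # clients don't retry in sync and overwhelm the service.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

The key design point is the final `raise`: when retries are exhausted, the failure is surfaced to the caller rather than swallowed, which forces the application to have a contingency path.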
Handling failures means having contingencies in place to handle unforeseen and unexpected situations. This not only means reacting and dealing with obvious failures, but also with situations that don’t clearly represent a failure. Let’s take a system that automatically launches virtual machine instances to do some processing: it must be prepared to handle the situation where the request for a new virtual machine is denied, but also for the situation where it receives a normal response for the request, but the virtual machine is never launched.

True availability

Availability issues become even more pronounced when we consider the track record of service providers. While almost everyone will promise aggressive SLAs (99%+ uptime), the fact is that many of the top tier providers routinely fail to deliver the promised levels of availability. If the systems that make use of these providers aren’t ready to handle a provider being unavailable for long periods of time, they will fail spectacularly in real life.
Another important point to take into consideration is that a system can only be as available as its underlying components. Many cloud software-as-a-service providers offer uptime guarantees that they can’t hope to match in real life, because these guarantees surpass what they are getting from their own infrastructure providers. If you’re looking for cloud-based software, always beware excessive promises. At the same time, take into consideration what it means to only have 99% availability: if the software were to stop working, can your business survive?
None of the issues discussed here are new. Most of them have been around since the advent of client-server architectures some decades ago, but sometimes users and developers forget the lessons of the past just because they are dealing with new technology. By remembering that cloud-based services are just like any other IT system and that eventual failures are expected, we can avoid many headaches when moving to the cloud.

Insider threats: Implementing the right controls


Takeaway: Describes the signs that an employee might become an insider threat and recommends the various controls and monitoring that can be implemented to mitigate such threats.
In Part 1 of this two-part series, I explored the three primary types of insider threats: theft of intellectual property by its creators, fraud by non-management personnel in critical need of cash, and damage to information resources by IT administrators. In Part 2, we examine what to look for in employee behavior as signals that something bad has or will happen. We also look at timing and controls for mitigating insider risk.

The signs

Most employees provide unintentional signals when they’re under significant pressure or when they perceive management is abusing them. Figure A is a list of possible signs that an employee is about to go rogue. In short, any significant change in behavior can be a sign that an employee’s loyalty is waning, including (from Prevent your employees from “going rogue“):
  • Appearing intoxicated at work
  • Sleeping at the desk
  • Unexplained, repeated absences on Monday or Friday
  • Pattern of disregard for rules
  • Drug abuse
  • Attempts to enlist others in questionable activities
  • Pattern of lying and deception of peers or managers
  • Talk of or attempt to harm oneself
  • Writing bad checks
  • Failure to make child support payments
  • Attempts to circumvent security controls
  • Long-term anger or bitterness about being passed over for promotion
  • Frustration with management for not listening to what the employee considers grave concerns about security or business processes
Figure A from Prevent your employees from “going rogue”
Employees often behave themselves in front of their managers. Consequently, a problem employee’s peers are the best monitoring tool an organization has. Train all employees to watch for signs of discontent. Providing a means of anonymously reporting peers to management is often the best approach to dealing with concerns many employees have of “not getting involved” or being labeled a tattletale.

Designing the right controls

As with any threat, the controls framework must consist of administrative, physical, and technical components.  The overall control design should enforce separation of duties, least privilege, and need-to-know.  A miss in any of these areas weakens your ability to deal with inevitable insider threats.

Administrative controls

Policies form the foundation. Clear statements of management intent serve two purposes. First, they make it clear to all employees what is and is not acceptable behavior and the consequences of behaving in unacceptable ways. Second, when supported by well-documented standards, guidelines, and procedures, they provide all employees with the capability to identify anomalous behavior in their peers, subordinates, and supervisors. Policies define acceptable behavior and enable every employee to detect rogue behavior.
The two objectives of policies described above are achieved only if all employees are aware of management’s expectations and how they affect each employee’s day-to-day work environment. Security training and continuous awareness activities fill this need.

Physical controls

Physical controls serve to deter, delay, detect, and respond to unauthorized personnel. Further, they control who can access physical resources (e.g., servers, routers, and switches) and when. The use of electronic physical controls adds logging and near-real-time oversight to physical access.
In many organizations, physical security is managed outside the security team. This does not mean, however, that security managers should simply ignore it. Any physical access to information resources circumvents most, if not all, technical controls. Understanding how to conduct a physical security gap analysis is the first step in engaging in the physical controls discussion.

Technical controls

Technical controls include identity management, authentication, authorization, and accountability. These control categories work together to reach the following access control objectives:
  • Identity management ensures each person and computer is assigned a meaningful set of attributes for use in the authentication and authorization steps. The identity provides a subject (an entity attempting to access a resource) with a manageable, trackable presence across an enterprise.
  • Authentication is the process of making an entity prove it is who or what it claims to be. Common controls include passwords, biometrics, and smartcards.
  • Authorization is the process of using the subject’s attributes to determine what it can access (need-to-know), what it can do with what it accesses (least privilege), and when access is allowed. In addition, authorization enforces both static and dynamic separation of duties. Separation of duties prevents any single subject from performing all tasks associated with a business process.
  • Accountability includes auditing, monitoring, and ensuring security teams understand what subject accessed a critical resource, when the resource was accessed, and what was done. In addition to monitoring authorized access, security teams should receive alerts when the number of unauthorized access attempts exceeds a predefined threshold.
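The accountability bullet above can be made concrete with a small sketch: count failed access attempts per subject and fire an alert once a predefined threshold is reached. The threshold here is an illustrative assumption, not a recommended value:

```python
# Minimal accountability sketch: alert when a subject's failed access
# attempts exceed a predefined threshold. Threshold value is illustrative.

from collections import defaultdict

class AccessMonitor:
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = defaultdict(int)

    def record_failure(self, subject):
        """Log a failed access attempt; return True if an alert should fire."""
        self.failures[subject] += 1
        return self.failures[subject] >= self.threshold

monitor = AccessMonitor(threshold=3)
for _ in range(3):
    alert = monitor.record_failure("payroll_clerk_7")
print("alert fired:", alert)
```

In practice this logic lives inside a SIEM or the authentication system itself; the point is simply that unauthorized-access counting must be per subject and must feed an alert, not just a log entry.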
Separation of duties and least privilege are two primary constraints limiting what an insider can achieve. For example, an organization in which separation of duties and least privilege are enforced makes it difficult for a payroll clerk to commit fraud. The clerk wouldn’t be able to modify employee records AND enter time worked information AND approve payroll AND print checks/perform electronic transfers AND pick up or distribute payments. To execute all of these tasks would require collusion: enlisting others in the theft.
Another example of separation of duties is preventing developers from placing new or modified applications into production. All code changes should be governed by a strict, closely managed, and distributed change management system. This helps prevent a developer or administrator from placing damaging code into production systems.
When assessing least privilege, consider whether the organization should allow copying of information to mobile storage devices (e.g., thumb drives, laptops, smartphones, etc.). Is it really necessary for everyone to remove information from within your organization’s trust boundary? Similarly, what is the risk associated with allowing employees to access personal email accounts and file transfer services (e.g., Transferbigfiles.com) while at the office? The answer depends on the sensitivity of the data involved and the strength of the other controls around it.

Monitoring and filtering

When attempting to detect internal threat actions, start with a good security information and event management (SIEM) system. The SIEM solution looks for anomalous behavior based on activity across one or more devices. It supports prevention and response controls and processes. Finally, be sure to enable logging for access to your valuable files, financial systems, and other critical systems.
Filtering solutions support monitoring in two ways. First, all data transfers are checked for sensitive information. With some systems, application of business policies prevents or restricts certain types of transfers. Filtering is also a great method of tracking what goes out via email. In any case, alerting is key when a questionable transfer occurs, such as a large file transfer at an odd time or between suspicious locations.
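A filtering rule of the kind described above might look like the following sketch. The size cutoff and "odd hours" window are illustrative assumptions; real values should come from your own traffic baselines:

```python
# Hypothetical transfer-filtering rule: flag transfers that are unusually
# large or occur at odd hours. Thresholds below are assumptions, not
# recommendations; tune them to your environment's baseline.

ODD_HOURS = set(range(0, 6))   # midnight through 5 a.m.
SIZE_LIMIT_MB = 500            # assumed "large transfer" cutoff

def is_questionable(transfer):
    """transfer: dict with 'size_mb' (number) and 'hour' (0-23)."""
    return transfer["size_mb"] > SIZE_LIMIT_MB or transfer["hour"] in ODD_HOURS

print(is_questionable({"size_mb": 900, "hour": 14}))  # large transfer -> True
```

A production filter would also weigh destination, user role, and file classification, but even a simple size-and-time rule can surface the obvious cases.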
NetFlow analysis supports filtering and logging solutions by identifying unusual activity across network segments and between systems. Often, it ships with the SIEM solution, so an organization doesn’t have to purchase an additional product. Once tuned to accept normal traffic patterns, it is a valuable tool for identifying anomalous data transfers.
Second, filtering can simply deny employees access to Internet locations used for extracting stolen data. Products like Websense or OpenDNS allow organizations to control access to external email and data transfer/storage sites. Blocking access is critical if no content-filtering solution exists. It is also critical during a departing employee’s transition.

Timing

According to the CERT Insider Threat Center, most thefts of intellectual property occur during the month before and the month after an employee leaves the company. This timeline also applies to IT insiders placing time bombs, back doors, etc., into production systems. Regardless of whether or not anyone reports one or more of the behaviors listed earlier, it is simply good security to check the past behavior of an employee once he or she gives notice.
Behavior checking should include accounts created, files accessed, data transfers completed, and any other activity relevant to moving data out of your network. Checking for unusual or seldom used administrator accounts is important.  However, organizations shouldn’t wait until someone gives notice before they audit privileged accounts. This should be part of normal auditing processes.
Finally, fraud usually takes place over long periods having nothing to do with when an employee leaves. In fact, leaving employment denies an insider access to the collusion-based network necessary to continue the flow of ill-gotten gains. Auditing and employee education are the best monitoring tools available for fraudulent behavior in process.

The final word

Trusted employees can go rogue for a number of reasons, some of which have nothing to do with how they’re treated at the office. While the reasons might vary, the insider-driven financial damage suffered by businesses each year demonstrates the need for closer monitoring of all key employees. I am not implying all employees are dishonest. However, the time will come when someone you trust crosses the line.
Detecting those who plan to do harm is often very difficult unless employee awareness, monitoring, alerting, and response processes are in place. Further, consider a detailed analysis of a departing employee’s system and network behavior, in accordance with clearly documented and distributed policies.