Thursday, 16 April 2015

Setting up a PXE-Boot Server with HTTP & FTP



You'll need to install the following packages (they ship with FC4, so if you did an 'everything' OS install you should have them already; if not, you can install them easily with yum):
tftp-server
dhcp
httpd
syslinux
Using yum to install them is generally a lot easier:
yum install tftp-server dhcp httpd syslinux
Answer Y to all dependency/installation questions.
DHCP Configuration: Create /etc/dhcpd.conf with the following contents:
ddns-update-style interim;
subnet 192.168.7.0 netmask 255.255.255.0 {
    range 192.168.7.10 192.168.7.254;
    default-lease-time 3600;
    max-lease-time 4800;
    option routers 192.168.7.1;
    option domain-name-servers 192.168.7.1;
    option subnet-mask 255.255.255.0;
    option domain-name "abc.local";
    option time-offset -8;
}
host abc.xyz.local {
    hardware ethernet 08:00:27:47:5E:03;
    fixed-address 192.168.7.95;
    option host-name "abc.xyz.local";
    filename "pxelinux.0";    # relative to the tftp server root, /tftpboot
}
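Before restarting anything, you can ask dhcpd to parse the config and report syntax errors (a quick sanity check; -cf simply points at the FC4 default config path):

dhcpd -t -cf /etc/dhcpd.conf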

 Next you need to activate tftp within xinetd. All that is necessary is to change disable=yes to disable=no in /etc/xinetd.d/tftp, then restart xinetd. For future reference, the tftp RPM for FC4 stores its servable content under /tftpboot.
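The same change can be made from the shell (a sketch; on FC4, chkconfig toggles the disable flag for xinetd-managed services):

chkconfig tftp on
service xinetd restart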
Now we need to set up your PXE server to use a static IP on the new private subnet. Create the file /etc/sysconfig/network-scripts/ifcfg-eth0.static with the following contents:
DEVICE=eth0
BOOTPROTO=static
ONBOOT=no
TYPE=Ethernet
IPADDR=192.168.7.2
NETMASK=255.255.255.0
GATEWAY=192.168.7.1
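To bring the interface up with this profile (a sketch; on Red Hat-style network scripts, ifup takes the part of the filename after "ifcfg-", so the profile above is referenced as eth0.static):

ifup eth0.static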

 Next, set up the PXE boot environment on the server. To do this, you need the Linux distribution that you wish to install over PXE, either in CD format or with all the content of the CDs available on the network.
On the first CD of every RH/FC distribution there is a subdirectory called 'isolinux'. In that directory you will find two files, vmlinuz and initrd.img. These are the kernel and initrd that the RH/FC bootable CDs use to get the installer (anaconda) booted for performing the installation. Copy both of those files into /tftpboot and make sure that they are world readable. If you are planning to allow more than one version/distribution to be PXE boot installable, then you should rename both files so that it's clear which version/distribution they came from (such as vmlinuz-RHEL4, initrd-RHEL4).
Next, you need the actual PXE bootloader (what actually runs immediately after your PXE boot client gets a DHCP lease). In this case, that file is pxelinux.0, and it is part of the syslinux RPM. For FC4, you can find it at /usr/lib/syslinux/pxelinux.0. Copy that file into /tftpboot and make sure that it is world readable.
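Putting the previous two steps together, the copies look something like this (a sketch; /media/cdrom is an assumed mount point for the first CD):

cp /media/cdrom/isolinux/vmlinuz /tftpboot/vmlinuz-RHEL4
cp /media/cdrom/isolinux/initrd.img /tftpboot/initrd-RHEL4
cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
chmod 644 /tftpboot/vmlinuz-RHEL4 /tftpboot/initrd-RHEL4 /tftpboot/pxelinux.0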
Next we need to configure pxelinux. First create the directory /tftpboot/pxelinux.cfg (and make it world readable). Inside that directory you need to create a number of zero-size files (use touch):
C
C0
C0A
C0A8
C0A80
C0A807
C0A8075
C0A8075F
01-08-00-27-47-5e-03
The first 8 are the hex representation of the 192.168.7.95 IP address that your PXE boot client will be assigned; the shorter prefixes allow progressively broader IP ranges to match. The last entry is the MAC address of your PXE boot client's NIC (in lowercase, with dashes substituted for the colons), with '01' prepended; the '01' represents a hardware type of Ethernet, and it matches the hardware ethernet line in dhcpd.conf above. pxelinux tries the MAC file first, then the IP-based names from longest to shortest, and finally falls back to the file named 'default'.
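You do not have to work out the hex by hand: gethostip, which also ships in the syslinux package, prints it for you, and a small loop creates all the files (a sketch matching the 192.168.7.95 client above):

gethostip -x 192.168.7.95    # prints C0A8075F
cd /tftpboot/pxelinux.cfg
for f in C C0 C0A C0A8 C0A80 C0A807 C0A8075 C0A8075F 01-08-00-27-47-5e-03; do touch $f; done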
Now create the default pxelinux configuration inside the new file
/tftpboot/pxelinux.cfg/default:
prompt 1
default linux
timeout 100

label linux
kernel vmlinuz
append initrd=initrd.img ramdisk_size=9216 noapic acpi=off
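If you serve more than one distribution, give each its own label using the renamed kernel/initrd files from earlier (a sketch; the RHEL4 names are the example ones suggested above):

prompt 1
default rhel4
timeout 100

label rhel4
kernel vmlinuz-RHEL4
append initrd=initrd-RHEL4 ramdisk_size=9216 noapic acpi=off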

NFS Configuration:
Copy the installation media to /var/ftp/pub/RedHat and make everything in it world readable:
chmod -R 777 /var/ftp/pub/RedHat
Then add the following line to /etc/exports:
/var/ftp/pub/RedHat 192.168.7.0/24(rw,sync)
Save the file and start the NFS service.
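After editing /etc/exports, the export can be activated and verified like this (a quick check; showmount queries the local NFS server):

exportfs -ra
service nfs start
showmount -e localhost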
FTP Configuration:
Enable and start the vsftpd service for FTP:
chkconfig vsftpd on
service vsftpd start
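A quick anonymous login confirms the share is visible (vsftpd serves /var/ftp to anonymous users by default, so the media appears under pub/RedHat):

ftp 192.168.7.2
# log in as "anonymous" with any password, then:
# cd pub/RedHat
# ls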
HTTPD Configuration:
Edit /etc/httpd/conf/httpd.conf, change DocumentRoot to /var/ftp/pub/RedHat, and add:

<Directory /var/ftp/pub/RedHat>
    Options Indexes
    AllowOverride None
</Directory>
Alias /linux /var/ftp/pub/RedHat

Now create a virtual host (replace system.qualified.name with your server's fully qualified domain name):

<VirtualHost *:80>
    ServerAdmin admin@system.qualified.name
    DocumentRoot /var/ftp/pub/RedHat
    ServerName system.qualified.name
    ErrorLog logs/system.qualified.name-error_log
    CustomLog logs/system.qualified.name-access_log common
</VirtualHost>
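Before starting Apache you can check the syntax, and once it is running, fetch the /linux alias to confirm the media is being served (a sketch; 192.168.7.2 is the PXE server's static address from earlier):

apachectl configtest
wget -q -O - http://192.168.7.2/linux/ | head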

 Now start dhcpd & apache and activate tftp by running the following:
service dhcpd start
service xinetd restart
service httpd start
and verify that they are all in your process list.
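A quick way to confirm this (a sketch using pgrep, which lists matching process names and PIDs):

pgrep -l dhcpd
pgrep -l httpd
pgrep -l xinetd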

Thursday, 2 April 2015

Oracle Security at Risk




Oracle Security at Risk: 

Java.net Pwn3d By a White Hat Hacker!

Big companies are usually expected to be top-level in terms of cyber security. Unfortunately, that is not the case for Oracle, the well-known software house behind Java.


An information security researcher from Italy, Christian Galeone, demonstrated how a single big security vulnerability can represent a severe threat to a large company and even to its employees.

What he found was a Path Traversal / LFI (Local File Inclusion) vulnerability in the Java JDK7 website.


After exploiting it, he noticed that it exposed sensitive server-side data.

The vulnerability not only allowed him to display the web server credentials, including root access; the vulnerable source code had also (wrongly) disclosed more than 460 private email addresses of Oracle employees. That is a big issue if you are worried about black-hat hackers.


After his finding, he quickly reported it to Oracle's security team, which fixed the issue in a single day and decided to acknowledge Christian for his ethical behaviour by crediting him in the Critical Patch Update (CPU) of 14 April 2015.

See more at: http://blog.hackersonlineclub.com/2015/04/oracle-security-at-risk-javanet-pwn3d.html

 

Friday, 27 March 2015

HDFS Overview - Hadoop



Hadoop File System was developed using distributed file system design. It runs on commodity hardware. Unlike other distributed systems, HDFS is highly fault-tolerant and designed to use low-cost hardware.
HDFS holds very large amounts of data and provides easy access. To store such huge data, the files are stored across multiple machines. These files are stored in a redundant fashion to protect the system from possible data loss in case of failure. HDFS also makes applications available for parallel processing.

Features of HDFS

  1. It is suitable for the distributed storage and processing.
  2. Hadoop provides a command interface to interact with HDFS (see the example after this list).
  3. The built-in servers of namenode and datanode help users to easily check the status of cluster.
  4. Streaming access to file system data.
  5. HDFS provides file permissions and authentication.
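As a quick taste of that command interface, a few common operations look like this (a minimal sketch; the paths are invented for the example):

hadoop fs -mkdir /user/joe/input
hadoop fs -put localfile.txt /user/joe/input
hadoop fs -ls /user/joe/input
hadoop fs -cat /user/joe/input/localfile.txt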

HDFS Architecture

Given below is the architecture of a Hadoop File System.
[Diagram: HDFS architecture]
HDFS follows the master-slave architecture and it has the following elements.

Namenode

The namenode is the commodity hardware that contains the GNU/Linux operating system and the namenode software. The system having the namenode acts as the master server and it does the following tasks:
  1. Manages the file system namespace.
  2. Regulates client’s access to files.
  3. It also executes file system operations such as renaming, closing, and opening files and directories.

Datanode

The datanode is commodity hardware having the GNU/Linux operating system and datanode software. For every node (commodity hardware/system) in a cluster, there will be a datanode. These nodes manage the data storage of their system.
  1. Datanodes perform read-write operations on the file systems, as per client request.
  2. They also perform operations such as block creation, deletion, and replication according to the instructions of the namenode.

Block

Generally the user data is stored in the files of HDFS. A file in the file system is divided into one or more segments, which are stored in individual data nodes. These file segments are called blocks. In other words, the minimum amount of data that HDFS can read or write is called a block. The default block size is 64 MB, but it can be increased as needed by changing the HDFS configuration.
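If the default does not suit your workload, the block size can be overridden in hdfs-site.xml (a sketch using the Hadoop 1.x property name of this era; the value is in bytes, here 128 MB):

<property>
  <name>dfs.block.size</name>
  <value>134217728</value> <!-- 128 * 1024 * 1024 bytes -->
</property>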

Goals of HDFS


  1. Fault detection and recovery : Since HDFS includes a large number of commodity hardware components, failure of components is frequent. Therefore HDFS should have mechanisms for quick and automatic fault detection and recovery.
  2. Huge datasets : HDFS should have hundreds of nodes per cluster to manage the applications having huge datasets.
  3. Hardware at data : A requested task can be done efficiently, when the computation takes place near the data. Especially where huge datasets are involved, it reduces the network traffic and increases the throughput.

Monday, 23 March 2015

Introduction to Hadoop.




Hadoop is an Apache open-source framework written in Java that allows distributed storage and processing of large datasets across clusters of computers using simple programming models. A Hadoop application works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

Hadoop Architecture

Hadoop framework includes the following four modules:
  • Hadoop Common: These are Java libraries and utilities required by other Hadoop modules. These libraries provide filesystem and OS-level abstractions and contain the necessary Java files and scripts required to start Hadoop.
  • Hadoop YARN: This is a framework for job scheduling and cluster resource management.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop MapReduce: This is YARN-based system for parallel processing of large data sets.
We can use the following diagram to depict these four components available in the Hadoop framework.
[Diagram: Hadoop architecture]
Since 2012, the term "Hadoop" often refers not just to the base modules mentioned above but also to the collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Spark etc.

MapReduce

Hadoop MapReduce is a software framework for easily writing applications which process large amounts of data in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
The term MapReduce actually refers to the following two different tasks that Hadoop programs perform:
  • The Map Task: This is the first task, which takes input data and converts it into a set of data, where individual elements are broken down into tuples (key/value pairs).
  • The Reduce Task: This task takes the output from a map task as input and combines those data tuples into a smaller set of tuples. The reduce task is always performed after the map task.
Typically both the input and the output are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.
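To make this concrete, here is a word count run through Hadoop Streaming, which lets plain shell commands act as the map and reduce functions (a minimal sketch; the jar path and HDFS paths are assumptions and vary by installation):

# map step: emit one word per line; reduce step: count each word
# (streaming sorts the mapper output by key before the reducer sees it)
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -input /user/joe/input \
    -output /user/joe/wordcount-out \
    -mapper 'tr " " "\n"' \
    -reducer 'uniq -c'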
The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master is responsible for resource management, tracking resource consumption/availability, and scheduling the job's component tasks on the slaves, monitoring them and re-executing failed tasks. The TaskTracker slaves execute the tasks as directed by the master and periodically provide task-status information to the master.
The JobTracker is a single point of failure for the Hadoop MapReduce service which means if JobTracker goes down, all running jobs are halted.

Hadoop Distributed File System

Hadoop can work directly with any mountable distributed file system such as Local FS, HFTP FS, S3 FS, and others, but the most common file system used by Hadoop is the Hadoop Distributed File System (HDFS).
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and provides a distributed file system that is designed to run on large clusters (thousands of computers) of small computer machines in a reliable, fault-tolerant manner.
HDFS uses a master/slave architecture where the master consists of a single NameNode that manages the file system metadata and one or more slave DataNodes that store the actual data.
A file in an HDFS namespace is split into several blocks and those blocks are stored in a set of DataNodes. The NameNode determines the mapping of blocks to the DataNodes. The DataNodes take care of read and write operations with the file system. They also take care of block creation, deletion, and replication based on instructions given by the NameNode.
HDFS provides a shell like any other file system and a list of commands are available to interact with the file system. These shell commands will be covered in a separate chapter along with appropriate examples.

How Does Hadoop Work?

Stage 1

A user/application can submit a job to Hadoop (via the Hadoop job client) for processing by specifying the following items:
  1. The location of the input and output files in the distributed file system.
  2. The Java classes in the form of a JAR file containing the implementation of the map and reduce functions.
  3. The job configuration by setting different parameters specific to the job.

Stage 2

The Hadoop job client then submits the job (jar/executable etc) and configuration to the JobTracker which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.

Stage 3

The TaskTrackers on different nodes execute the task as per MapReduce implementation and output of the reduce function is stored into the output files on the file system.

Advantages of Hadoop




  • Hadoop framework allows the user to quickly write and test distributed systems. It is efficient, and it automatically distributes the data and work across the machines and, in turn, utilizes the underlying parallelism of the CPU cores.
  • Hadoop does not rely on hardware to provide fault-tolerance and high availability (FTHA), rather Hadoop library itself has been designed to detect and handle failures at the application layer.
  • Servers can be added or removed from the cluster dynamically and Hadoop continues to operate without interruption.
  • Another big advantage of Hadoop is that apart from being open source, it is compatible with all platforms, since it is Java based.

Thursday, 29 January 2015

SEO - Relevant Filenames


One of the simplest methods to improve your search engine optimization is to look at the way you name your files. Before writing this tutorial, we did a lot of research on file names and found that search engines like Google give significant weight to file names. You should think about what you want to put in your web page and then give a relevant file name to the page.
Just try giving any keyword in the Google search engine and you will find file names highlighted with the keyword you have given. It proves that your file name should contain appropriate keywords.

File Naming Style

  • The filename should preferably be short and descriptive.
  • It is always good to use the same keywords in a filename as in the page title.
  • Do not use filenames such as service.htm or job.htm as they are generic. Use actual service name in your file name such as computer-repairing.htm.
  • Do not use more than 3-4 words in file names.
  • Separate the keywords with hyphens rather than underscores.
  • Try to use 2 keywords if possible.

File Name Example

Listed below are some filenames which would be ideal from the users' point of view as well as SEO.
slazenger-brand-balls.html
wimbledon-brand-balls.html
wilson-brand-balls.html
Notice that the keywords are separated by hyphens rather than underscores. Google reads good filenames as follows:
seo-relevant-filename is read as "seo relevant filename" (good)
Filenames with underscores are not a good option:
seo_relevant_filename is read as "seorelevantfilename" (not good)

File Extension

You should note that extensions such as .html, .htm, and .php do NOTHING for your visitors; they are simply a means of offloading some of the work of configuring your webserver properly onto your visitors. In effect, you are asking your site visitors to tell your webserver HOW to produce the page, not which page they want.
Many webmasters think that it is a good idea to use filenames without extensions. It may help you, but not a whole lot.
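If you do want extensionless URLs, one common approach is an Apache rewrite rule that maps them back to the real .html files (a minimal sketch, assuming mod_rewrite is enabled and the rules live in the site's .htaccess):

RewriteEngine On
# if the request is not an existing directory and a matching .html file exists, serve it
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]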

URL Sub-Directory Name

From the Search Engine Optimization point of view, the URL sub-directory name hardly matters. You can try giving any keyword in any search, and you will not find any sub-directory name matching your keywords. But from the user's point of view, you should keep an abbreviated sub-directory name.

Guru Mantra

Keep the following points in mind before naming your files:
  • Keep the web page filename short, simple, descriptive, and relevant to the page content.
  • Try to use a maximum of 3-4 keywords in your filename, and these keywords should appear on your web page title as well.
  • Separate all keywords with hyphens rather than underscores.
  • Keep your sub-directories name as short as possible.
  • Restrict the file size to less than 101K because Google chops almost everything above that.

Wednesday, 28 January 2015

Web Site Domain search Engine optimization




When you start thinking of doing a business through internet, the first thing that you think about is your website domain name. Before you choose a domain name, you should consider the following:
  • Who would be your target audience?
  • What you intend to sell to them. Is it a tangible item or just text content?
  • What will make your business idea unique or different from everything else that is already available in the market?
Many people think it is important to have keywords in a domain. Keywords in the domain name can indeed help, and they can usually be included while keeping the domain name short, memorable, and free of hyphens.
Using keywords in your domain name gives you a strong competitive advantage over your competitors. Having your keywords in your domain name can increase click-through rates on search engine listings and paid ads, and it also makes it easier to get keyword-rich, descriptive inbound links.
Avoid buying long and confusing domain names. Many people separate the words in their domain names using either dashes or hyphens. In the past, the domain name itself was a significant ranking factor but now search engines have advanced features and it is not a very significant factor anymore.
Keep two to three words in your domain name that will be easy to memorize. Some of the most notable websites do a great job of branding by creating their own word. Few examples are eBay, Yahoo!, Expedia, Slashdot, Fark, Wikipedia, Google, etc.
You should be able to say it over the telephone once, and the other person should know how to spell it, and they should be able to guess what you sell.
Guru Mantra
Finally, you should be able to answer the following questions:
  • Why do you want to build your website?
  • Why should people buy off your site and not from another site?
  • What makes you different from others?
  • Who are your target audience and what do you intend to sell?
  • List 5 to 10 websites that you think are amazing. Now think why they are amazing.
  • Create 5 different domain names. Make at least 1 of them funny. Tell them to half a dozen people and see which ones are the most memorable. You will get more honest feedback if the people do not know you well.
  • Buy your domain name that is catchy, memorable, and relevant to your business.

What is Search Engine optimization (SEO) ?



Search Engine Optimization (SEO) is the activity of optimizing web pages or whole sites in order to make them search engine friendly, thus getting higher positions in search results.
This tutorial explains simple SEO techniques to improve the visibility of your web pages for different search engines, especially for Google, Yahoo, and Bing.

Audience

This tutorial has been prepared for beginners to help them understand the simple but effective SEO characteristics.

Prerequisites

We assume you are aware of simple web technologies such as HTML, XHTML, Style Sheet, etc. If you already have developed any website, then it is an added advantage and it will help you understand the concepts of SEO explained in this tutorial.

SEO stands for Search Engine Optimization. SEO is all about optimizing a website for search engines. SEO is a technique for:
  • designing and developing a website to rank well in search engine results.
  • improving the volume and quality of traffic to a website from search engines.
  • marketing by understanding how search algorithms work, and what human visitors might search.



SEO is a subset of search engine marketing. SEO is also referred to as SEO copywriting, because most of the techniques that are used to promote sites in search engines deal with text.
If you plan to do some basic SEO, it is essential that you understand how search engines work.

How Does a Search Engine Work?

Search engines perform several activities in order to deliver search results.
  • Crawling - The process of fetching all the web pages linked to a website. This task is performed by software called a crawler or a spider (Googlebot, in the case of Google).
  • Indexing - The process of creating an index for all the fetched web pages and keeping them in a giant database from where they can later be retrieved. Essentially, indexing is identifying the words and expressions that best describe the page and assigning the page to particular keywords.
  • Processing - When a search request comes, the search engine processes it, i.e. it compares the search string in the search request with the indexed pages in the database.
  • Calculating Relevancy - It is likely that more than one page contains the search string, so the search engine starts calculating the relevancy of each of the pages in its index to the search string.
  • Retrieving Results - The last step in search engine activities is retrieving the best matched results. Basically, it is nothing more than simply displaying them in the browser.
Search engines such as Google and Yahoo! often update their relevancy algorithm dozens of times per month. When you see changes in your rankings it is due to an algorithmic shift or something else outside of your control.
Although the basic principle of operation of all search engines is the same, the minor differences between their relevancy algorithms lead to major changes in results relevancy.

What is SEO Copywriting?

SEO Copywriting is the technique of writing viewable text on a web page in such a way that it reads well for the surfer, and also targets specific search terms. Its purpose is to rank highly in the search engines for the targeted search terms.
Along with viewable text, SEO copywriting usually optimizes other on-page elements for the targeted search terms. These include the Title, Description, Keywords tags, headings, and alternative text.
The idea behind SEO copywriting is that search engines want genuine content pages and not additional pages often called "doorway pages" that are created for the sole purpose of achieving high rankings.

What is Search Engine Rank?

When you search any keyword using a search engine, it displays thousands of results found in its database. A page ranking is measured by the position of web pages displayed in the search engine results. If a search engine is putting your web page on the first position, then your web page rank will be number 1 and it will be assumed as the page with the highest rank.
SEO is the process of designing and developing a website to attain a high rank in search engine results.

What is On-Page and Off-page SEO?

Conceptually, there are two ways of optimization:
  • On-Page SEO - It includes providing good content, good keyword selection, placing keywords in the right places, giving an appropriate title to every page, etc.
  • Off-Page SEO - It includes link building, increasing link popularity by submitting to open directories, search engines, link exchanges, etc.






Thursday, 22 January 2015

What is OpenStack ?



OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. Backed by some of the biggest companies in software development and hosting, as well as thousands of individual community members, many think that OpenStack is the future of cloud computing. OpenStack is managed by the OpenStack Foundation, a non-profit which oversees both development and community-building around the project.

Introduction to OpenStack

OpenStack lets users deploy virtual machines and other instances which handle different tasks for managing a cloud environment on the fly. It makes horizontal scaling easy, which means that tasks which benefit from running concurrently can easily serve more or fewer users on the fly by just spinning up more instances. For example, a mobile application which needs to communicate with a remote server might be able to divide the work of communicating with each user across many different instances, all communicating with one another but scaling quickly and easily as the application gains more users.
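In practice, "spinning up more instances" is a single command. For example, with the nova command-line client of that era (a sketch; the flavor, image, and key names are placeholders for the example):

nova boot --flavor m1.small --image cirros-0.3.4 --key-name mykey web-worker-01
nova list    # shows the new instance and its status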
And most importantly, OpenStack is open source software, which means that anyone who chooses to can access the source code, make any changes or modifications they need, and freely share these changes back out to the community at large. It also means that OpenStack has the benefit of thousands of developers all over the world working in tandem to develop the strongest, most robust, and most secure product that they can.

How is OpenStack used in a cloud environment?

The cloud is all about providing computing for end users in a remote environment, where the actual software runs as a service on reliable and scalable servers rather than on each end user's computer. Cloud computing can refer to a lot of different things, but typically the industry talks about running different items "as a service": software, platforms, and infrastructure. OpenStack falls into the latter category and is considered Infrastructure as a Service (IaaS). Providing infrastructure means that OpenStack makes it easy for users to quickly add new instances, upon which other cloud components can run. Typically, the infrastructure then runs a "platform" upon which a developer can create software applications that are delivered to the end users.

What are the components of OpenStack?

OpenStack is made up of many different moving parts. Because of its open nature, anyone can add additional components to OpenStack to help it to meet their needs. But the OpenStack community has collaboratively identified nine key components that are a part of the "core" of OpenStack, which are distributed as a part of any OpenStack system and officially maintained by the OpenStack community.
  • Nova is the primary computing engine behind OpenStack. It is a "fabric controller," which is used for deploying and managing large numbers of virtual machines and other instances to handle computing tasks.
  • Swift is a storage system for objects and files. Rather than the traditional idea of referring to files by their location on a disk drive, developers can instead refer to a unique identifier for a file or piece of information and let OpenStack decide where to store it. This makes scaling easy, as developers don't have to worry about the capacity of a single system behind the software. It also allows the system, rather than the developer, to worry about how best to make sure that data is backed up in case of the failure of a machine or network connection.
  • Cinder is a block storage component, which is more analogous to the traditional notion of a computer being able to access specific locations on a disk drive. This more traditional way of accessing files might be important in scenarios in which data access speed is the most important consideration.
  • Neutron provides the networking capability for OpenStack. It helps to ensure that each of the components of an OpenStack deployment can communicate with one another quickly and efficiently.
  • Horizon is the dashboard behind OpenStack. It is the only graphical interface to OpenStack, so for users wanting to give OpenStack a try, this may be the first component they actually "see." Developers can access all of the components of OpenStack individually through an application programming interface (API), but the dashboard gives system administrators a look at what is going on in the cloud and lets them manage it as needed.
  • Keystone provides identity services for OpenStack. It is essentially a central list of all of the users of the OpenStack cloud, mapped against all of the services provided by the cloud which they have permission to use. It provides multiple means of access, meaning developers can easily map their existing user access methods against Keystone.
  • Glance provides image services to OpenStack. In this case, "images" refers to images (or virtual copies) of hard disks. Glance allows these images to be used as templates when deploying new virtual machine instances.
  • Ceilometer provides telemetry services, which allow the cloud to provide billing services to individual users of the cloud. It also keeps a verifiable count of each user’s system usage of each of the various components of an OpenStack cloud. Think metering and usage reporting.
  • Heat is the orchestration component of OpenStack, which allows developers to store the requirements of a cloud application in a file that defines what resources are necessary for that application. In this way, it helps to manage the infrastructure needed for a cloud service to run.

Who is OpenStack for?

You may be an OpenStack user right now and not even know it! As more and more companies begin to adopt OpenStack as a part of their cloud toolkit, the universe of applications running on an OpenStack backend is ever-expanding.

How do I get started with OpenStack?

If you just want to give OpenStack a try, one good resource for spinning the wheels without committing any physical resources is TryStack. TryStack lets you test your applications in a sandbox environment to better understand how OpenStack works and whether it is the right solution for you.
Ready to learn more? Every month, we publish a collection of the best new guides, tips, tricks, tutorials for OpenStack.
OpenStack is always looking for new contributors. Consider joining the OpenStack Foundation or reading this introduction to getting started with contributing to OpenStack.


Tuesday, 13 January 2015

How to Enable/Disable Web Access in VMware ESX Server





This article provides steps to disable VMware Web Access services and prevent the login user interface from appearing via http and https. It also provides steps to enable VMware Web Access services once again in the future.


To disable VMware Web Access:
  1. Log in as root to the ESX host using an SSH client.
  2. Run this command to stop the VMware Web Access service:

    service vmware-webAccess stop

  3. To prevent the service from starting again upon reboot, run the command:

    chkconfig --level 345 vmware-webAccess off

  4. After disabling the Web Access service you can browse the index page of the ESX host and download the VMware Infrastructure or vSphere Client from it, but you cannot log into Web Access from the page.

To enable VMware Web Access:
  1. Log in as root to the ESX host using an SSH client.
  2. Run this command to determine if the VMware Web Access service is running:

    service vmware-webAccess status

  3. Run this command to start the VMware Web Access service:

    service vmware-webAccess start

  4. To enable the service to start upon reboot, run this command:

    chkconfig --level 345 vmware-webAccess on

    Note: The ESX firewall must allow webAccess communication or the service will not start when ESX 4.0 boots. To enable webAccess in the ESX firewall, run this command on the ESX host:

    esxcfg-firewall --enableService webAccess

  5. If VMware Web Access does not start after rebooting the ESX host:
    1. Select the ESX host in vSphere Client.
    2. Click the Configuration tab > Security Profile.
    3. Check vSphere Web Access, then run this command on the ESX console:

      chkconfig --level 345 vmware-webAccess on

Sunday, 11 January 2015

How to Group Related Programs Together on Windows 10?



When you are on your computer at work, it is normal to have several programs running at once, some for work and some for personal use. Unfortunately, as you know, having unrelated programs running on your desktop can easily cause distraction and reduce your productivity. Ideally, you could group the programs together by type and use only one group while the others remain hidden. That way, you increase the chances of finishing your work on time because the working desktop environment is distraction-free. This can be achieved using the Task View feature in Windows 10, which allows you to create multiple desktop environments for different purposes.

1. Launch all the programs that you want to be in the first group. For example, this group contains your programs for work.

2. Reveal the taskbar if it is hidden by moving the mouse to the bottom of the screen, and then click the Task View button, the icon of two stacked windows.


[Screenshot: Windows 10 Task View]

3. Immediately after you click the button, the system will display a series of thumbnails for programs running on the current desktop and an Add a desktop button at the bottom of the screen. If you click that button, the system will create a brand new desktop for you.

[Screenshot: creating a new desktop]

4. Click the second desktop thumbnail to set it as the active one. Once you are inside the second desktop, you will notice that all previously opened windows are now gone. Now, it is time to open programs for your second group.

5. To switch between desktops, come back to Task View, and then choose the desired one. You may notice that it is possible to add another desktop using the plus button.

[Screenshot: switching between desktops]

6. When you are on one desktop, you may notice a little highlight under some programs indicating that they are open on the other desktop. If you click such an icon, you will be taken to the desktop that contains the program.





7. To close a desktop, go back to Task View, hover your mouse over the desktop you want to close, and then click the X button. When the desktop is closed, apps opened inside it will be closed as well.

How to Customize the Start Menu on Windows 10?




With the release of Windows 10, Microsoft decided to bring back the Start menu that many users complained about missing in Windows 8. However, the new Start menu also includes an area for live tiles, which work similarly to the ones on the Windows 8 Start screen. Using the steps in this tutorial, you can customize the Start menu to suit your needs and workflow.

1. Click the Windows logo at the bottom left of the screen to open the Start menu. Alternatively, you can open it by pressing the Win key on your keyboard.




2. When the Start screen appears, you will see a grid of live tiles on the right side. If you right-click on one of them, you will see several options, including: Unpin from Start, Pin to taskbar, Uninstall, Resize, and Turn live tile off. Under Resize, there are several sizes from which to choose: small, medium, wide, and large. As soon as you choose a size, the chosen tile will be resized and moved accordingly. The size of the Start menu will increase or decrease to adapt to the change, if necessary.
To move a tile, click and hold it, then drag it to the desired location. If there is a tile in that location, it will be moved to a nearby slot.
Unpinned apps can still be found in the app list.





3. To pin a program to the Start menu, right-click on it, and choose Pin to Start. If it is a Metro app, you will see a live tile. Otherwise, the tile will contain the app’s icon.







4. The Start menu can be resized to several predefined shapes as seen in the examples below. Resizing the Start menu is similar to resizing a normal app’s window. All you have to do is move the mouse to the top edge, and then drag it down until it resizes to the shape that you want.

[Screenshots: Start menu resized to different shapes]


