Book Excerpt: Making Software a Service
Part 1 - Instead of developing a simple desktop application, you can develop your software as a service

Developing your Software as a Service (SaaS) takes you out of the dark ages of programming and into a new age in which copyright protection, DRM, and piracy concerns largely disappear. In the current age of computing, people don't expect to pay for software; instead, they prefer to pay for the support and other services that come with it. When was the last time anyone paid for a web browser? With the advent of open source applications, the majority of paid software is moving to hosted systems, which rely less on users' physical machines. This means you don't need to support a wide range of hardware, or other software that may conflict with yours, such as permissions, firewalls, and antivirus software.

Instead of developing a simple desktop application that you need to defend and protect against piracy and cloning, you can develop your software as a service, releasing updates and new content seamlessly while charging your users on a monthly basis. With this method, you can charge your customers a small monthly fee instead of making them pay a large amount for the program upfront, and you can make more money in the long run. For example, many people pirate Microsoft Office instead of shelling out $300 upfront for a legal copy, whereas if it were offered online in a format such as Google Docs, those same people might gladly pay $12.50 a month for the service. Not only do they get a web-based version that they can use on any computer, but everything they save is stored online and backed up. After two years of a user paying for your service, you've made as much money from that client as you would have from a desktop sale, plus you're ensuring that they'll stay with you as long as they want access to those documents. However, if your users try the software for a month and decide they don't like it, they don't need to continue the subscription, and they have lost only a small amount of money. If you offer a trial-based subscription, users can test your software at no cost, which means they're more likely to sign up.
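The subscription arithmetic above is easy to check (the figures are the hypothetical ones from this example, not real pricing):

```python
# Rough break-even comparison between a one-time license and a subscription.
upfront_price = 300.00   # hypothetical one-time desktop license
monthly_fee = 12.50      # hypothetical monthly subscription fee

months_to_match = upfront_price / monthly_fee
print(months_to_match)   # 24.0 months, i.e. two years of subscription
```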

Tools Used in This Book
You need to take a look at some of the tools used throughout this book. For the examples, the boto Python library is used to communicate with Amazon Web Services. This library is currently the most full-featured Python library for interacting with AWS, and it's one I helped develop. It's relatively easy to install and configure, so only a few brief instructions are given here. boto currently works only with Python 2.5 to 2.7, not Python 3; Python 2.6 is recommended for the purposes of this book.

Signing Up for Amazon Web Services
Before installing the libraries required to communicate with Amazon Web Services, you need to sign up for an account and any services you need. This can be done by going to the AWS website, choosing Sign Up Now, and following the instructions. You need to provide a credit card, which is billed for your usage, but you won't actually be charged until the end of each month. You can log in at any time to sign up for more services. You pay only for what you use, so don't worry about accidentally signing up for too many things. At a minimum, you need to sign up for the following services:

  • Elastic Compute Cloud (EC2)
  • Simple Storage Service (S3)
  • SimpleDB
  • Simple Queue Service (SQS)

After you create your account, log in to your portal by clicking Account and then choosing Security Credentials. Here you can see your Access Credentials, which will be required in the configuration section later. At any given time you may have two access keys associated with your account; these are your private credentials for accessing Amazon Web Services. You may also deactivate either of these keys, which helps when migrating to a new set of credentials, because you can keep both active until everything is migrated over to the new keys.

Installing boto
You can install boto in several different ways, but the best way to make sure you're using the latest code is to download the source from GitHub. There are several ways to download the code, but the easiest is to click the Downloads button and choose a version. Although the master branch is typically okay for development purposes, you probably want to download the latest tag, because it's guaranteed to be stable, and all the tests have been run against it before bundling. Download it to your local disk and unpack it before continuing.

The next step is to install the boto package. As with any Python package, this is done using the file with either the install or develop command. Open a terminal, or command shell on Windows, change to the directory where you unpacked the boto source code, and run:

$ python install

Depending on what type of system you run, you may have to do this as root or administrator. On UNIX-based systems, this can be done by prepending sudo to the command:

$ sudo python install

On Windows, you should be prompted for your administrative login if it's required, although most likely it's not.

Setting Up the Environment
Although there are many ways to set up your environment for boto, use the one that's also compatible with the downloadable Amazon developer tools. Each service has its own set of command-line developer tools written in Java, and most of them also let you use the configuration file shown here to set up your credentials. Name this file credentials.cfg and put it somewhere easily identified:
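The original listing is not reproduced here, so what follows is a minimal sketch of the credential file, assuming the key names used by the standard AWS command-line tools (AWSAccessKeyId and AWSSecretKey); substitute the Access Credentials from your AWS portal:

```ini
# File: credentials.cfg
AWSAccessKeyId = <your access key id>
AWSSecretKey = <your secret key>
```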


You can make this the active credential file by setting an environment variable AWS_CREDENTIAL_FILE and pointing it to the full location of this file. On bash-based shells, this can be done with the following:

export AWS_CREDENTIAL_FILE=/full/path/to/credentials.cfg

You can also add this to your shell's RC file, such as .bashrc or .zshrc, or add the following to your .tcshrc if you use tcsh instead:

setenv AWS_CREDENTIAL_FILE /full/path/to/credentials.cfg

For boto, create a boto.cfg that enables you to configure some of the more boto-specific aspects of your systems. Just like in the previous example, you need to make this file and then set an environment variable, this time BOTO_CONFIG, to point to the full path of that file. Although this configuration file isn't completely necessary, some things can be useful for debugging purposes, so go ahead and make your boto.cfg:

# File: boto.cfg
[Instance]
# Imitate some EC2 configs
local-ipv4 =
local-hostname = localhost
security-groups = default
public-ipv4 =
public-hostname = my-public-hostname.local
hostname = localhost
instance-type = m1.small
instance-id = i-00000000

[DB]
# Set the default SDB domain
db_name = default

# Set up base logging
[loggers]
keys=root

[handlers]
keys=screen

[formatters]
keys=basic

[logger_root]
level=INFO
handlers=screen

[handler_screen]
class=StreamHandler
formatter=basic
args=(sys.stdout,)

[formatter_basic]
format=%(asctime)s [%(name)s] %(levelname)s %(message)s

The first thing to do here is set up an [Instance] section that makes your local environment act like an EC2 instance. This section is automatically added when you launch a boto-based EC2 instance by the startup scripts that run there. These configuration options may be referenced by your scripts later, so adding this section means you can test those locally before launching an EC2 instance.

Next, set the default SimpleDB domain to "default," which will be used in your Object Relational Mappings you'll experiment with later in this excerpt. For now, all you need to know is that this will store all your examples and tests in a domain called "default," and that you'll create this domain in the following testing section.

Finally, you set up a few configuration options for the Python logging module, specifying that all logging should go to standard output so that you'll see it when running from a console. These options can be customized to send the logging to a file, or to use any other format you may want, but for the basics here, just dump it to your screen and show only log messages at the INFO level and above. If you encounter any issues, you can drop this down to DEBUG to see the raw queries being sent to AWS.

Testing It All
If you installed and configured boto as described in the previous steps, you should be able to start a Python interpreter and run the following sequence of commands:

>>> import boto
>>> sdb = boto.connect_sdb()
>>> sdb.create_domain("default")

The preceding code tests your connectivity to SimpleDB and creates the default domain referenced in the previous configuration section. This domain is used in later sections of this excerpt, so make sure you don't get any errors. If you get an error message indicating that you haven't signed up for the service, go back to the AWS portal and make sure you signed up for SimpleDB. If you get another error, you may have configured something incorrectly, so check the error message to see what the problem may be. If you're still having issues, you can always head over to the boto home page or ask for help in the boto users group.

What Does Your Application Need?
After you have the basic requirements for your application and decide what you need to implement, you can begin to describe what the application actually requires. Typically this is not a question you think about when creating smaller-scale applications, because everything you need lives in a single box. Instead of looking at everything together as one complete unit or "box," you need to split out what you actually need and identify which cloud services can fit those requirements. Typical applications need the following:

  • Compute power
  • Fast temporary storage
  • Large long-term storage
  • Small queryable long-term storage
  • Communication between components or modules

Think about this application as a typical nonstatic website that requires some sort of execution environment or web server, such as an e-commerce site or web blog. When a request comes in, you need to return an HTML page, or perhaps an XML or JSON representation of just the data, that may be either static or dynamically created. To determine this, you need to process the actual request using your compute power. This process also requires fast temporary storage to store the request and build the response. It may also require you to pull information about the users out of a queryable long-term storage location. After you look up the users' information, you may need to pull out some larger long-term storage information, such as a picture that they may have requested or a specific blog entry that is too large to store in a smaller queryable storage engine. If the users request to upload a picture, you may have to store that image in your larger long-term storage engine and then request that the image be resized to multiple sizes, so it may be used for a thumbnail image. Each of these requirements your application has on the backend may be solved by using services offered by your cloud provider.
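As a rough illustration, these requirements line up with the AWS services you signed up for earlier; the pairing below is a sketch, not an official mapping:

```python
# A hypothetical mapping of the application's needs onto the AWS
# services this excerpt uses (EC2, S3, SimpleDB, SQS).
requirements = {
    "Compute power": "Elastic Compute Cloud (EC2)",
    "Fast temporary storage": "local instance storage on EC2",
    "Large long-term storage": "Simple Storage Service (S3)",
    "Small queryable long-term storage": "SimpleDB",
    "Communication between components or modules": "Simple Queue Service (SQS)",
}

for need, service in requirements.items():
    print(need, "->", service)
```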

If you generalize from this simple website to almost any service, you'll realize that most applications need exactly the same things. If you split the application into multiple layers, you can begin to understand what it truly means to build SaaS instead of a typical desktop application. One major advantage of SaaS is that it lends itself to subscription-based software, which doesn't require complex licensing or distribution points; this not only cuts costs but also ensures that you won't have to worry about piracy. Because you're providing a service, your clients pay you every time they want to use it. Clients also prefer this method because, just as with a cloud-hosting provider, they don't have to pay as much upfront, and they can typically start with a small trial account to see whether it will work for them. They also don't have to invest in any local hardware and can access their information and services from anywhere with Internet access. This type of application moves the heavy processing off your clients' systems and onto your servers, which lowers your clients' cost of entry.

Taking a look back at your website, you can see that there are three main layers of this application. This is commonly referred to as a three-tier application pattern and has been used for years to develop SaaS. The three layers include the data layer to store all your long-term needs, the application layer to process your data, and the client or presentation layer to present the data and the processes you can perform for your client.

Data Layer
The data layer is the base of your entire application, storing all the dynamic information for your application. In most applications, this is actually split into two parts. One part is the large, slow storage used to store any file-like objects or any data that is too large to store in a smaller storage system. This is typically provided for you by a network-attached-storage type of system provided by your cloud hosting solution. In Amazon Web Services, this is called Simple Storage Service or S3.

Another large part of this layer is the small, fast, and queryable information. In most typical systems, this is handled by a database. This is no different in cloud-based applications, except for how you host this database.

Introducing the AWS Databases
In Amazon Web Services, you actually have two different ways to host this database. One option is a nonrelational database, known as SimpleDB or SDB, which can be confusing to grasp initially but in general is much cheaper to run and scales automatically. This nonrelational database is currently the cheapest and easiest-to-scale database provided by Amazon Web Services, because you don't pay for anything except what you actually use. As such, it can be considered a true cloud service, rather than just an adaptation on top of existing cloud services. Additionally, this database scales up to one billion key-value pairs per domain automatically, and you don't have to worry about overusing it because it's built on the same architecture as S3. This database is quite efficient at storing and retrieving data if you build your application to work with it, but it doesn't handle complex queries well. If you can think of your application in simple terms relating directly to objects, you can most likely use this database. If, however, you need something more complex, you need to use a Relational DB (RDB).

RDB is Amazon's solution for applications that can't be built on SDB because they place complex requirements on their databases, such as complex reporting, transactions, or stored procedures. If you need your application to produce server-based reports that use complex SELECT queries joining multiple objects, or you need transactions or stored procedures, you probably need RDB. This new service is Amazon's answer to running your own MySQL database in the cloud, and it is essentially an Amazon-managed MySQL solution. You can use it if you're comfortable with MySQL, because it lets Amazon manage your database for you, so you don't have to worry about any of the IT-level details. It supports cloning, backing up, and restoring based on snapshots or points in time. In the near future, Amazon will be releasing support for more database engines and expanding its solutions to support high availability (write clustering) and read-only clustering.

If you can't figure out which solution you need, you can always use both. If you need the flexibility and power of SDB, use it for creating your objects, and then run scripts to push that data to MySQL for reporting purposes. In general, if you can use SDB, you probably should, because it is generally much easier to use. SDB is split into a simple three-level hierarchy of domain, item, and key-value pairs. A domain is almost identical to a "database" in a typical relational DB; an item can be thought of as a table that doesn't require any schema; and each item may have multiple key-value pairs below it that can be thought of as the columns and values in each item. Because SDB is schema-less, it doesn't require you to predefine the possible keys that can appear under each item, so you can push multiple item types into the same domain.
Figure 1 illustrates the relation between the three levels.

In Figure 1, the relation between an item and its key-value pairs is one-to-many, so you can have multiple key-value pairs for each item. Additionally, keys are not unique, so you can have multiple key-value pairs with the same key, which is essentially the same thing as one key having multiple values.

Figure 1: The SDB hierarchy
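The domain, item, and key-value hierarchy can be mimicked with plain Python dictionaries; this is an illustration only, with made-up names:

```python
# A domain maps item names to items; each item maps keys to one or
# more values. Different "types" of item can share one domain because
# SDB imposes no schema.
domain = {
    "user_1234": {                  # an item
        "name": ["Chris"],          # a key with a single value
        "tags": ["admin", "staff"]  # a key with multiple values
    },
    "post_5678": {                  # a different item type, same domain
        "title": ["Hello SDB"],
    },
}

print(domain["user_1234"]["tags"])
```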

Connecting to SDB
Connecting to SDB is quite easy using the boto communication library. Assuming you already have your boto configuration environment set up, all you need to do is use the proper connection methods:

>>> import boto
>>> sdb = boto.connect_sdb()
>>> db = sdb.get_domain("my_domain_name")
>>> db.get_item("item_name")

This returns a single item by its name, which is logically equivalent to selecting all attributes by an ID from a standard database. You can also perform simple queries on the database, as shown here:

>>>"SELECT * FROM `my_domain_name` WHERE `name` LIKE '%foo%' ORDER BY `name` DESC")

The preceding example works exactly like a standard relational DB query, returning all attributes of any item whose name key contains foo anywhere in its value, sorted by name in descending order. SDB sorts and operates by lexicographical comparison and handles only string values, so it doesn't understand that -2 is less than -1. The SDB documentation provides more details on this query language for more complex requests.
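You can see the string-comparison problem, and the offset-and-pad idea that boto's persistence layer uses to work around it, in a few lines of plain Python (the encode_int helper is illustrative, with a made-up offset and width, not boto's exact implementation):

```python
# SDB compares values as strings, so numeric order breaks down:
print(sorted(["10", "9", "2"]))   # ['10', '2', '9'], not numeric order

def encode_int(n, offset=10**9, width=10):
    """Shift by a fixed offset so negatives become positive, then
    zero-pad so string order matches numeric order."""
    return str(n + offset).zfill(width)

values = [-2, -1, 9, 10]
assert sorted(values, key=encode_int) == values   # order now preserved
assert encode_int(-2) < encode_int(-1)            # "-2 less than -1" holds
```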

Using an Object Relational Mapping
boto also provides a simple persistence layer to translate all values so that they can be lexicographically sorted and searched for properly. This persistence layer operates much like the DB layer of Django, which it's based on. Designing an object is quite simple; you can read more about it in the boto documentation, but the basics can be seen here:

from boto.sdb.db.model import Model
from import StringProperty, IntegerProperty, \
    ReferenceProperty, ListProperty

class SimpleObject(Model):
    """A simple object to show how SDB
    persistence works in boto"""
    name = StringProperty()
    some_number = IntegerProperty()
    multi_value_property = ListProperty(str)

class AnotherObject(Model):
    """A second SDB object used to show how references work"""
    name = StringProperty()
    object_link = ReferenceProperty(SimpleObject,
        collection_name="other_objects")
This code creates two classes (which can be thought of as tables): a SimpleObject, which contains a name, a number, and a multivalued property of strings, and an AnotherObject that references it. The number is automatically converted on save by adding a fixed offset to the value and properly loaded back by subtracting that offset. This conversion ensures that the number stored in SDB is always positive, so lexicographical sorting and comparison always work. The multivalue property acts just like a standard Python list, enabling you to store multiple values in it and even remove values. Each time you save the object, everything that was in there is overridden. Each object also has an id property by default, which is actually the name of the item because that is a unique ID. boto uses Python's uuid module to generate this ID automatically if you don't set it manually; the module generates completely random, unique strings, so you don't rely on a single point of failure to generate sequential numbers. The collection_name attribute on the object_link property of AnotherObject is optional but enables you to specify the name of the reverse-reference property that is automatically created on SimpleObject. This reverse reference is generated for you automatically when you import the second object.
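The automatic ID generation can be sketched with the uuid module directly (this mirrors the idea, not boto's exact code):

```python
import uuid

# When no id is supplied, a random UUID string becomes the item name.
# Random generation means no central counter handing out sequential ids.
auto_id = str(uuid.uuid4())
print(auto_id)   # a 36-character string such as 'xxxxxxxx-xxxx-...'
```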

boto enables you to create and query these objects in the database in a similarly simple manner. It provides a few methods that use the values available in boto's SDB connection objects for you, so you don't have to worry about building your query. To create an object, you can use the following code:

>>> my_obj = SimpleObject("object_id")
>>> = "My Object Name"
>>> my_obj.some_number = 1234
>>> my_obj.multi_value_property = ["foo", "bar"]
>>> my_obj.put()
>>> my_second_obj = AnotherObject()
>>> = "Second Object"
>>> my_second_obj.object_link = my_obj
>>> my_second_obj.put()

To create the link to the second object, you have to actually save the first object unless you specify the ID manually. If you don't specify an ID, it will be set automatically for you when you call the put method. In this example, the ID of the first object is set but not for the second object.

To select an object given an ID, you can use the following code:

>>> my_obj = SimpleObject.get_by_id("object_id")

This call returns an instance of the object and enables you to retrieve any of the attributes contained in it. There is also a "lazy" reference to the second object, which is not actually fetched until you specifically request it:

u'My Object Name'
>>> my_obj.some_number
>>> my_obj.multi_value_property
[u'foo', u'bar']
u'Second Object'

You call next() on the other_objects property because what's returned is actually a Query object. This object operates exactly like a generator and only performs the SDB query if you actually iterate over it. Because of this, you can't do something like this:

>>> my_obj.other_objects[0]
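The Query object's laziness can be illustrated with a plain Python generator; this is an analogy, not boto's implementation:

```python
# A stand-in for boto's Query: results are produced one at a time,
# so nothing is fetched until you iterate.
def fake_query():
    for name in ["First Other", "Second Other"]:
        # imagine each yield costing one SDB round-trip
        yield name

results = fake_query()
print(next(results))        # fetches only the first result

# Like a Query, a generator is not indexable:
try:
    fake_query()[0]
except TypeError:
    print("generators do not support indexing")
```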

This feature is implemented for performance reasons: the query could actually match thousands of records, and performing an SDB request would consume a lot of unnecessary resources unless you actually want that property. Additionally, because it is a query, you can filter on it just like any other query:

>>> query = my_obj.other_objects
>>> query.filter("name like", "%Other")
>>> query.order("-name")
>>> for obj in query:
...     print
The preceding code loops over each object whose name ends with Other, sorted in descending order on the name. After all matching results have been returned, a StopIteration exception is raised internally, which terminates the loop.

About Chris Moyer
Chris Moyer is a recent graduate of RIT, the Rochester Institute of Technology, with a bachelor's degree in Software Engineering. He has more than five years of experience in programming, with a main emphasis on cloud computing. Much of his time has been spent working on the popular boto client library, used for communicating with Amazon Web Services. Having studied under the creator of boto, Mitch Garnaat, Chris went on to create two web frameworks based on this client library, known as Marajo and botoweb, and has built large-scale applications on those frameworks.

Chris is currently Vice President of Technology for Newstex, LLC, where he manages the technological development of migrating applications to the cloud, and he also manages his own department, which is actively maintaining and developing several applications. Chris lives with his wife, Lynn, in the New York area.
